Columns: url (string, 14–1.76k chars), text (string, 100–1.02M chars), metadata (string, 1.06k–1.1k chars)
https://blog.argentgames.co/post/2019-03-15-markus-route-one-third-complete/
Markus’ route is now nearly 1/3 complete! Read a script snippet and check out some art teasers~ # Markus Route Update Markus’ route is just about 1/3 written! After some initial outlining, progress has sped up considerably. You can expect to see a number of interesting references to other routes within Markus’, although they may not unravel completely until you’ve finished the special endings… # Sprite Teasers The end is finally in sight for sprites–there are just a few backer characters and some final expressions to do. Exciting, right? For now, let’s take a look at the WIP sketches for a couple of new backer characters! Like the previous pair, these two will pop up for a chat or two in-game, adding some more dynamic personalities to the vampiric underworld. And then, a finely-groomed Iscari agent who may appear now and then, enforcing his employer’s requests… We have quite a few questions from our Ask Box for you this week! Feel free to send in your queries about anything AG-related. Do you plan to release additional (minor/side) character routes via DLC for RE:H later? A: Most likely not, but we may consider it based on how some characters are received. The two major female characters would be the most likely to get side routes, but Saorise’s would be purely platonic/friendship-focused. Can I ask how big RE:H’s cast is? A: There are eight major characters (excluding the MC), ten minor/supporting characters (with sprites), and several more who don’t appear on screen but have a role in the story. You probably won’t (and maybe can’t?) meet all of them every playthrough, so remember to hang out at different spots~ Are vampires capable of surviving under water? A: Yes! Vampires in the RE:H universe have absolutely no need to breathe, although many of them continue to mime it (including sighs, gasps, etc.) either consciously or subconsciously. A permanently soggy vampire could potentially get pretty gross, though. Do you know how much Red Embrace: Hollywood is going to cost, or is that not decided yet? A: We’re currently guessing around $17 or $18, but as the word count and total game length are still TBA, that estimate isn’t set in stone yet. On mobile, all the routes (and possibly route chapters) will have the option to be individually purchased. Would it be too spoilery to ask what kind of NSFW things to expect in the different routes? A: The NSFW content in RE:H is not particularly kink-focused–or at least, not in any traditional sense. It is, however, definitely unique, and the player’s perception of the scenes may differ greatly based on their choices… And let’s not forget, two corpses flopping against each other is a recipe for an interesting time. When is a “rogue vampire” considered a “rogue vampire”? Our sire is called that in the demo by Saorise. A: When Saorise says a “rogue vampire” in the demo, she’s referring to any vampire who isn’t affiliated with her organization (without going into technicalities, since the MC is still new). Prior to the events of RE:H, the vampiric Houses of L.A. were all ruled by Saorise, including the Mavvar and Golgotha. Even after the war broke out, some Mavvar and Golgotha remained loyal to her, which is why the Iscari organization remains so strong. Therefore, Saorise sees all the vampires who decided to rebel (or claim to be unaffiliated) as having “gone rogue,” as they technically are the minority. There’s no official definition across all vampiric society, though. 
For example, in the original RE, Bishop would’ve seen any vampire in San Fran outside of his official coven as a “rogue.”
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2626761198043823, "perplexity": 6137.76007059552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00143.warc.gz"}
http://lucatrevisan.wordpress.com/tag/unique-games/
You are currently browsing the tag archive for the ‘unique games’ tag. [I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a one-line paper that would simply refer to Subhash's own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here are some fragments. Comments are very welcome.] 1. Why is the Unique Games Conjecture Useful? In a previous post we stated the Unique Games Conjecture and made the following informal claim, here rephrased in abbreviated form: To reduce Label Cover to a graph optimization problem like Max Cut, we map variables to collections of vertices and we map equations to collections of edges; then we show how to “encode” assignments to variables as 2-colorings of vertices which cut a ${\geq c_1}$ fraction of edges, and finally (this is the hardest part of the argument) we show that given a 2-coloring that cuts a ${\geq c_2}$ fraction of edges, then 1. the given 2-coloring must be somewhat “close” to a 2-coloring coming from the encoding of an assignment and 2. if we “decode” the given 2-coloring to an assignment to the variables, such an assignment satisfies a noticeable fraction of equations. Starting our reduction from a Unique Game instead of a Label Cover problem, we only need to prove (1) above, and (2) more or less follows for free. To verify this claim, we “axiomatize” the properties of a reduction that only achieves (1): we describe a reduction mapping a single variable to a graph, such that assignments to the variable are mapped to good cuts, and somewhat good cuts can be mapped back to assignments to the variable. The reader can then go back to our analysis of the Max Cut inapproximability proof in the previous post, and see that the properties below are sufficient to implement the reduction. [I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a one-line paper that would simply refer to Subhash's own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here is part 1 of some fragments. Comments are very welcome.] Khot formulated the Unique Games Conjecture in a remarkably influential 2002 paper. In the subsequent eight years, the conjecture has motivated and enabled a large body of work on the computational complexity of approximating combinatorial optimization problems (the original context of the conjecture) and on the quality of approximation provided by “semidefinite programming” convex relaxations (a somewhat unexpected byproduct). Old and new questions in analysis, probability and geometry have played a key role in this development. Representative questions are: • The central limit theorem explains what happens when we sum several independent random variables. What happens if, instead of summing them, we apply a low degree polynomial function to them? • In Gaussian space, what is the body of a given volume with the smallest boundary? • What are the balanced Boolean functions whose value is most likely to be the same when evaluated at two random correlated inputs? • What conditions on the Fourier spectrum of a function of several variables imply that the function essentially depends on only a few variables? 
• With what distortion is it possible to embed various classes of finite metric spaces into L1? Every year, at the AMS Joint Mathematics Meeting, there is a “Current Events” session, with four talks on new and exciting developments in math. Some of the talks are given by speakers who are knowledgeable about the development they are talking about, but unrelated to it. I think of this as an excellent idea, and in fact I have sometimes fantasized about a theory seminar in which, as a rule, the speakers can only talk about other people’s work. (This has remained a fantasy because it would be too hard to recruit the speakers.) Indeed, at the Berkeley theory lunch, I used to encourage such talks, and I rarely spoke about my own work. This is all to say that at the 2011 meeting one talk will be about Unique Games (according to my calculations, this means that computer science theory accounts for 1/4 of what’s new and exciting in math, yay!) and I have been invited to deliver it. In the next three weeks, I am supposed to submit a picture (or a description of it, which will be realized by the AMS graphic designers) that illustrates the topic of my talk. What picture would you use? I am thinking of a drawing of the Khot-Vishnoi graph or a drawing of the graph used to reduce unique games to Max Cut. A face picture of Subhash would also make sense, but I don’t think it’s what they are looking for. (See here for the graphics accompanying past talks.) Suggestions? I was at a conference last week, at which I heard two pieces of mathematical gossip. One was that Arora, Barak and Steurer have developed an algorithm that, given a Unique Games instance in which a $1-\epsilon$ fraction of constraints are satisfiable, finds an assignment satisfying a constant fraction of constraints in time $2^{n^{poly(\epsilon)}}$. This is now officially announced in an earlier (public) paper by Arora, Impagliazzo, Matthews and Steurer, which presented a slower algorithm running in time $2^{\alpha n}$ where $\alpha = exp(-1/\epsilon)$. I suppose that the next targets are now the approximation problems for which the only known hardness is via unique games. Is there a subexponential algorithm achieving $\log\log n$ approximation for sparsest cut or $1.99$ approximation for Vertex Cover? The other news is on the Polynomial Ruzsa-Freiman conjecture, one of the main open problems in additive combinatorics. Apologies in advance to readers if I get some details wrong. In the special case of $\mathbb{F}_2$, the conjecture is that if $F: \mathbb{F}_2^n \rightarrow \mathbb{F}_2^m$ is any function such that $\mathbb{P}_{x,y,z} [ F(x) + F(y) + F(z)= F(x+y+z) ] \geq \epsilon$ then there is a matrix $M$ and a vector $b$ such that $\mathbb{P}_{x} [ F(x) = Mx + b ] \geq \epsilon^{O(1)}$ where the probability in the conclusion is independent of $n,m$ and is polynomial in $\epsilon$. Various proofs were known achieving a bound of $exp(-poly(1/\epsilon))$. The first proof, due to Samorodnitsky, achieves, I believe, a bound of $exp(-O(1/\epsilon^2))$, while the results from this paper should imply a bound of $exp(-\tilde O(1/\epsilon))$ where we use the notation $\tilde O(n) := O(n \log n)$. At the conference, Ben Green announced a result of Schoen implying a subexponential bound of $1/2^{2^{\sqrt{\log 1/\epsilon}}}$. 
The proof begins with the standard step of applying a theorem of Ruzsa to find a subset $A\subseteq \mathbb{F}_2^n$ such that $|A| \geq \epsilon^{O(1)} 2^n$, and $F$ on $A$ is a “Freiman 16-homomorphism,” meaning that for every 32 elements of $A$ such that $a_1 + \cdots + a_{16} = a_{17} + \cdots + a_{32}$ we have $F(a_1) + \cdots + F(a_{16}) = F(a_{17}) + \cdots + F(a_{32})$. The choice of 16 is just whatever makes the rest of the proof work. The theorem of Ruzsa works for any arbitrarily large constant. Then we consider the set $B := 8A$ of all the elements that can be written as $a_1 + \cdots + a_8$ with $a_i \in A$, and we define a function $F'$ on $8A$ by setting $F'(a_1 + \cdots + a_8) := F(a_1) + \cdots + F(a_8)$, which is a well-posed definition because $F$ is a Freiman 16-homomorphism. (It would have been sufficient if $F$ had been an 8-homomorphism.) Note that for every $b_1,b_2,b_3,b_4 \in B$ such that $b_1 + b_2 = b_3 + b_4$ we have $F'(b_1) + F'(b_2) = F'(b_3) + F'(b_4)$. Now the result of Schoen implies that if $A$ is a subset of $\mathbb {F}_2^n$ of size $2^n/K$, then there is a subspace $V$ of dimension $n - 2^{O(\sqrt {\log K})}$ such that $V$ is entirely contained in $8A$. Previously, it was known how to get a subspace of dimension $n - \tilde O(K)$, by adapting a technique of Chang (see this survey paper). Note that $F'$ is now a linear map on $V$, and that it agrees with $F$ on $V$. (I cannot reconstruct how this last step follows, however — maybe the claim is that $F$ agrees with $F'$ on a $poly(\epsilon)$ fraction of elements of $V$? ) It now just remains to extend, arbitrarily, $F'$ to a linear map over the entire space. In 2004 I wrote a survey on hardness of approximation as a book chapter for a book on approximation algorithms. I have just prepared a revised version for a new edition of the book. While it would have been preferable to rethink the organization and start from scratch, because of time constraints I was only able to add sections on integrality gaps and on unique games, and to add references to more recent work (e.g. the combinatorial proof of the PCP theorem, the new 2-query PCPs, the tight results on minimizing congestion in directed networks, and so on). If your (or your friend’s) important results are not cited, do let me know. The deadline to submit the chapter has long passed, but the threats from the editor haven’t yet escalated to the point where I feel that I have to submit it or else.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 56, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7826473712921143, "perplexity": 373.4863952185409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663372.35/warc/CC-MAIN-20140930004103-00377-ip-10-234-18-248.ec2.internal.warc.gz"}
https://socratic.org/questions/what-is-the-net-ionic-equation-of-2h-so-4-2-ca-2-2i-caso-4-2h-2i
# What is the net ionic equation of 2H^+ + SO_4^(2-) + Ca^(2+) + 2I^- -> CaSO_4 + 2H^+ + 2I^-? Mar 25, 2017 Simply cancel out the ions that are COMMON as reactants and products...... #### Explanation: $\cancel{2H^{+}} + SO_4^{2-} + Ca^{2+} + \cancel{2I^{-}} \rightarrow CaSO_4(s)\downarrow + \cancel{2H^{+}} + \cancel{2I^{-}}$ to give....... $Ca^{2+} + SO_4^{2-} \rightarrow CaSO_4(s)\downarrow$ And the questions you’ve got to be asking yourself are: $\text{(i) Is mass balanced?}$ $\text{(ii) Is charge balanced?}$ Well, are they? If they are not, then you have not made a reasonable representation of chemical reality. $CaSO_4$ is reasonably insoluble in aqueous solution, and it precipitates from solution as a white solid.
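The two checks asked about above (is mass balanced? is charge balanced?) can also be made mechanical. Below is a small illustrative Python sketch, with the species of this particular net ionic equation hard-coded, that tallies the atoms and the net charge on each side; it is only a demonstration of the bookkeeping, not a general equation balancer.

```python
from collections import Counter

# Each species: (element counts, ionic charge, stoichiometric coefficient).
reactants = [({"Ca": 1}, +2, 1), ({"S": 1, "O": 4}, -2, 1)]   # Ca^2+ + SO4^2-
products = [({"Ca": 1, "S": 1, "O": 4}, 0, 1)]                # CaSO4(s)

def totals(side):
    atoms, charge = Counter(), 0
    for composition, q, coeff in side:
        for element, n in composition.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

r_atoms, r_charge = totals(reactants)
p_atoms, p_charge = totals(products)
print("Mass balanced: ", r_atoms == p_atoms)     # True: 1 Ca, 1 S, 4 O on each side
print("Charge balanced:", r_charge == p_charge)  # True: net charge 0 on each side
```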
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3260630667209625, "perplexity": 5584.602215465151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305494.6/warc/CC-MAIN-20220128104113-20220128134113-00293.warc.gz"}
http://www.mathworks.com/help/stats/poisscdf.html?requestedDomain=www.mathworks.com&nocookie=true
# poisscdf Poisson cumulative distribution function ## Syntax p = poisscdf(x,lambda) p = poisscdf(x,lambda,'upper') ## Description p = poisscdf(x,lambda) returns the Poisson cdf at each value in x using the corresponding mean parameters in lambda. x and lambda can be vectors, matrices, or multidimensional arrays that have the same size. A scalar input is expanded to a constant array with the same dimensions as the other input. The parameters in lambda must be positive. p = poisscdf(x,lambda,'upper') returns the complement of the Poisson cdf at each value in x, using an algorithm that more accurately computes the extreme upper tail probabilities. The Poisson cdf is $p=F(x|\lambda)=e^{-\lambda}\sum_{i=0}^{\lfloor x\rfloor}\frac{\lambda^{i}}{i!}$ ## Examples For example, consider a Quality Assurance department that performs random tests of individual hard disks. Their policy is to shut down the manufacturing process if an inspector finds more than four bad sectors on a disk. What is the probability of shutting down the process if the mean number of bad sectors (lambda) is two? probability = 1-poisscdf(4,2) probability = 0.0527 About 5% of the time, a normally functioning manufacturing process produces more than four flaws on a hard disk. Suppose the average number of flaws (lambda) increases to four. What is the probability of finding fewer than five flaws on a hard drive? probability = poisscdf(4,4) probability = 0.6288 This means that this faulty manufacturing process continues to operate after this first inspection almost 63% of the time.
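For readers who want to cross-check the two examples above without MATLAB, the same numbers can be reproduced with SciPy's Poisson distribution; this is an illustrative equivalent, not part of the MathWorks documentation.

```python
from scipy.stats import poisson

# Probability of more than four bad sectors when the mean number is lambda = 2.
print(1 - poisson.cdf(4, mu=2))   # ~0.0527, matches 1 - poisscdf(4,2)
print(poisson.sf(4, mu=2))        # same quantity via the upper tail, like poisscdf(...,'upper')

# Probability of fewer than five flaws when the mean number is lambda = 4.
print(poisson.cdf(4, mu=4))       # ~0.6288, matches poisscdf(4,4)
```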
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5769737362861633, "perplexity": 1468.4663164329772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102967.65/warc/CC-MAIN-20170817053725-20170817073725-00662.warc.gz"}
https://iksinc.online/2015/04/13/words-as-vectors/?replytocom=214
# Words as Vectors The vector space model is well known in information retrieval, where each document is represented as a vector. The vector components represent weights or the importance of each word in the document. The similarity between two documents is computed using the cosine similarity measure. Although the idea of using vector representations for words has also been around for some time, the interest in word embedding, techniques that map words to vectors, has been soaring recently. One driver for this has been Tomáš Mikolov’s Word2vec algorithm, which uses a large amount of text to create high-dimensional (50 to 300 dimensional) representations of words, capturing relationships between words unaided by external annotations. Such a representation seems to capture many linguistic regularities. For example, it yields a vector approximating the representation for vec(‘Rome’) as a result of the vector operation vec(‘Paris’) – vec(‘France’) + vec(‘Italy’). Word2vec uses a single hidden layer, fully connected neural network as shown below. The neurons in the hidden layer are all linear neurons. The input layer is set to have as many neurons as there are words in the vocabulary for training. The hidden layer size is set to the dimensionality of the resulting word vectors. The size of the output layer is the same as the input layer. Thus, assuming that the vocabulary for learning word vectors consists of V words and N is the dimension of the word vectors, the input to hidden layer connections can be represented by a matrix WI of size VxN, with each row representing a vocabulary word. In the same way, the connections from the hidden layer to the output layer can be described by a matrix WO of size NxV. In this case, each column of the WO matrix represents a word from the given vocabulary. The input to the network is encoded using the “1-out-of-V” representation, meaning that only one input line is set to one and the rest of the input lines are set to zero. To get a better handle on how Word2vec works, consider the training corpus having the following sentences: “the dog saw a cat”, “the dog chased the cat”, “the cat climbed a tree” The corpus vocabulary has eight words. Once ordered alphabetically, each word can be referenced by its index. For this example, our neural network will have eight input neurons and eight output neurons. Let us assume that we decide to use three neurons in the hidden layer. This means that WI and WO will be 8×3 and 3×8 matrices, respectively. Before training begins, these matrices are initialized to small random values as is usual in neural network training. Just for illustration’s sake, let us assume WI and WO to be initialized to particular small values (the specific matrices were shown as images in the original post and are not reproduced here). Suppose we want the network to learn the relationship between the words “cat” and “climbed”. That is, the network should show a high probability for “climbed” when “cat” is inputted to the network. In word embedding terminology, the word “cat” is referred to as the context word and the word “climbed” is referred to as the target word. In this case, the input vector X will be [0 1 0 0 0 0 0 0]t. Notice that only the second component of the vector is 1. This is because the input word is “cat”, which holds position number two in the sorted list of corpus words. Given that the target word is “climbed”, the target vector will look like [0 0 0 1 0 0 0 0 ]t. 
With the input vector representing “cat”, the output at the hidden layer neurons can be computed as Ht = XtWI = [-0.490796 -0.229903 0.065460] It should not surprise us that the vector H of hidden neuron outputs mimics the weights of the second row of the WI matrix, because of the 1-out-of-V representation. So the function of the input to hidden layer connections is basically to copy the input word vector to the hidden layer. Carrying out similar manipulations for the hidden to output layer, the activation vector for the output layer neurons can be written as HtWO = [0.100934  -0.309331  -0.122361  -0.151399   0.143463  -0.051262  -0.079686   0.112928] Since the goal is to produce probabilities for words in the output layer, Pr(wordk|wordcontext) for k = 1, …, V, to reflect their next-word relationship with the context word at the input, we need the sum of the neuron outputs in the output layer to add to one. Word2vec achieves this by converting the activation values of the output layer neurons to probabilities using the softmax function. Thus, the output of the k-th neuron is computed by the following expression, where activation(n) represents the activation value of the n-th output layer neuron: output(k) = exp(activation(k)) / Σn=1..V exp(activation(n)) Thus, the probabilities for the eight words in the corpus are: 0.143073   0.094925   0.114441   0.111166   0.149289   0.122874   0.119431   0.144800 The probability in bold (the fourth entry, 0.111166) is for the chosen target word “climbed”. Given the target vector [0 0 0 1 0 0 0 0 ]t, the error vector for the output layer is easily computed by subtracting the probability vector from the target vector. Once the error is known, the weights in the matrices WO and WI can be updated using backpropagation. Thus, the training can proceed by presenting different context-target word pairs from the corpus. In essence, this is how Word2vec learns relationships between words and in the process develops vector representations for the words in the corpus. 
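The single context–target computation described above can be reproduced with a few lines of NumPy. The sketch below uses small random weight matrices as placeholders (the specific WI and WO values used in the post were shown as images), so the printed numbers will differ from those quoted above, but the steps are the same: copy a row of WI, multiply by WO, apply softmax, and form the output error.

```python
import numpy as np

V, N = 8, 3                          # vocabulary size and word-vector dimension
rng = np.random.default_rng(0)
WI = rng.uniform(-0.5, 0.5, (V, N))  # input -> hidden weights, one row per vocabulary word
WO = rng.uniform(-0.5, 0.5, (N, V))  # hidden -> output weights, one column per vocabulary word

x = np.zeros(V); x[1] = 1.0          # one-hot input for "cat" (second word in the sorted vocabulary)
t = np.zeros(V); t[3] = 1.0          # one-hot target for "climbed" (fourth word)

h = x @ WI                           # hidden layer output: simply copies row 1 of WI
u = h @ WO                           # output layer activations
p = np.exp(u) / np.exp(u).sum()      # softmax turns activations into probabilities
error = t - p                        # output error fed to backpropagation to update WO and WI

print(p.round(6))
print(error.round(6))
```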
Continuous Bag of Words (CBOW) Learning The above description and architecture is meant for learning relationships between pairs of words. In the continuous bag of words model, the context is represented by multiple words for a given target word. For example, we could use “cat” and “tree” as context words for “climbed” as the target word. This calls for a modification to the neural network architecture. The modification, shown below, consists of replicating the input to hidden layer connections C times, the number of context words, and adding a divide by C operation in the hidden layer neurons. [An alert reader pointed out that the figure below might lead some readers to think that CBOW learning uses several input matrices. It is not so. It is the same matrix, WI, that is receiving multiple input vectors representing different context words.] With the above configuration to specify C context words, each word being coded using the 1-out-of-V representation means that the hidden layer output is the average of the word vectors corresponding to the context words at the input. The output layer remains the same, and the training is done in the manner discussed above. Skip-Gram Model The skip-gram model reverses the use of target and context words. In this case, the target word is fed at the input, the hidden layer remains the same, and the output layer of the neural network is replicated multiple times to accommodate the chosen number of context words. Taking the example of “cat” and “tree” as context words and “climbed” as the target word, the input vector in the skip-gram model would be [0 0 0 1 0 0 0 0 ]t, while the two output layers would have [0 1 0 0 0 0 0 0]t and [0 0 0 0 0 0 0 1 ]t as target vectors, respectively. In place of producing one vector of probabilities, two such vectors would be produced for the current example. The error vector for each output layer is produced in the manner discussed above. However, the error vectors from all output layers are summed up to adjust the weights via backpropagation. This ensures that the weight matrix WO for each output layer remains identical all through training. In the above, I have tried to present a simplistic view of Word2vec. In practice, there are many other details that are important to achieve training in a reasonable amount of time. At this point, one may ask the following questions: 1. Are there other methods for generating vector representations of words? The answer is yes, and I will be describing another method in my next post. 2. What are some of the uses/advantages of words as vectors? Again, I plan to answer this soon in my coming posts. ## 34 thoughts on “Words as Vectors” 1. Panuwat Assawinjaipetch says: Hi, first of all, I would like to apologize for disturbing you about this. I am very new to the machine learning and NLP field, and I found that your example here is very useful for my practice, as I can keep tracking my computation to see whether I am going through each computing step correctly. If possible, and if it is not too much trouble, could you explain further how we can calculate the backpropagation of the softmax function after we get an error? Again, I’m sorry for troubling you and would be very grateful if you could help me understand this to the end. Thank you. (So my code that followed your calculation stopped at Error = Y – probability, which is equal to [-0.14307333, -0.0949255, -0.11444132, 0.8888341, -0.14928925, -0.12287422, -0.11943087, -0.14479961]) 1. The backpropagation algorithm is pretty well known, and it is a part of most packages for machine learning. Please check any book on machine learning to see the details of backpropagation. Let me know if you still have difficulty. Thanks for visiting my blog. 2. azy says: I want to try to implement word2vec for the Vietnamese language, but I’m confused about the pre-trained vectors. When I used it for English I used GoogleNews-vectors-negative300.bin.gz (about 3.4GB) as pre-trained vectors, and it works well. If I work with the Vietnamese language, should I create the pre-trained vectors myself? How do I make pre-trained vectors such as GoogleNews-vectors-negative300.bin.gz? When I converted GoogleNews-vectors-negative300.bin to text format, the result was: 3000000 300 0.001129 -0.000896 0.000319 0.001534 0.001106 -0.001404 -0.000031 -0.000420 -0.000576 0.001076 -0.001022 -0.000618 -0.000755 0.001404 -0.001640 -0.000633 0.001633 -0.001007 -0.001266 0.000652 -0.000416 -0.001076 0.001526 -0.000275 0.000140 0.001572 0.001358 -0.000832 -0.001404 0.001579 0.000254 -0.000732 -0.000105 -0.001167 0.001579 How do I change a letter or word into the form above? 3. Karla Rocio Vargas Godoy says: Thanks for this great explanation. I’ve been really looking for something like this. I’m not an ANN expert, but this post made me understand, at least!! 🙂 4. Mohammad Sulaiman Khan says: Hello. I have used the CBOW (n-gram) model with the scikit-learn package to classify Bangla language data. 
But I need an implementation of skip-gram for my thesis project, which scikit-learn doesn’t have. After searching Google and different blogs, I have come to know that Word2Vec has an implementation of skip-gram. That’s where I am confused. Does word2vec use the skip-gram model or the CBOW model? If it uses skip-gram, can you provide me any resource? It would be a great help for me. Thanks in advance 5. Jeremy says: Hi, may I know how you calculated these values, please? I understand this part: Ht = XtWI = [-0.490796 -0.229903 0.065460] HtWO = [0.100934 -0.309331 -0.122361 -0.151399 0.143463 -0.051262 -0.079686 0.112928] but from here, how do I derive this: Thus, the probabilities for eight words in the corpus are: 0.143073 0.094925 0.114441 0.111166 0.149289 0.122874 0.119431 0.144800 1. The probabilities are calculated by taking the ratio of the output of every output node to the sum of all output nodes’ outputs. This is shown in the post just above the probability values via the use of the softmax function. 6. jhon says: I tried to calculate the softmax result and got a different answer. Where is the mistake? 0.100934 + -0.309331 + -0.122361 + -0.151399 + 0.143463 + -0.051262 + -0.079686 + 0.112928 = -0.356714 0.100934 / -0.356714 = -0.282954972 -0.309331 / -0.356714 = 0.867168095 -0.122361 / -0.356714 = 0.343022702 -0.151399 / -0.356714 = 0.424426852 0.143463 / -0.356714 = -0.402179337 -0.051262 / -0.356714 = 0.143706162 -0.079686 / -0.356714 = 0.223389046 0.112928 / -0.356714 = -0.316578548 Thanks. 1. You are not performing exponentiation. That is why you are getting different numbers, as well as both positive and negative numbers. With exponentiation, as given in the formula in the post, all numbers will be positive and within 0-1. 7. Alex says: Hi Krishan, very nice explanation of the word2vec procedure 🙂 but I still cannot understand what the final word representation vector is. Is it the vector of probabilities over the dictionary? If not, then how can one extract the final result? Thanks for answering 🙂 1. The final word representation vectors are read from the WI matrix at the end of the training. Each row gives a word vector. Thus, the i-th row will provide the vector representation for the i-th word in the dictionary. 1. Ajay Jadhav says: But for this case you will get 3 values (probabilities) for each word. Do we need to add them? If yes, then the value in front of the word “climbed” should be high. Is it? 2. Each output layer neuron gives one number. The softmax function converts these numbers to probabilities, one for each word. 8. liam says: Hi Krishan, Very intuitive explanation. Thanks. I have watched the Stanford DL for NLP course (http://web.stanford.edu/class/cs224n/syllabus.html), which explains word2vec by modeling the probability of observing a word given the context: $$p(o|c) = \exp(u_o^T v_c)/\sum_{w=1}^W \exp(u_w^T v_c)$$ Here it stresses that we should use 2 vectors to represent a word. I didn’t know why we need to use this model until I saw your post. The 2 vectors are actually the hidden units and the weight vectors from the hidden units to the output layer. Best, 9. George says: Very nice post! “That is, the network should show a high probability for “climbed” when “cat” is inputted to the network.” This should be from the corpus, in other words, from the training data. So it should be used to calculate the error? However, I didn’t see you use this to calculate the error. Would the whole procedure above be the same if we didn’t have the corpus but just a vocabulary with the eight words? 
Thanks, and I look forward to your reply. 10. It is being used to calculate the error. Please take a look at: “The probability in bold is for the chosen target word “climbed”. Given the target vector [0 0 0 1 0 0 0 0 ]t, the error vector for the output layer is easily computed by subtracting the probability vector from the target vector. Once the error is known, the weights in the matrices WO and WI can be updated using backpropagation.” The only thing is that I haven’t detailed this part of the process. 11. sivashankari says: Dear Sir, I am Sivashankari. I am able to understand the theory concepts, but I am unable to evaluate my level of understanding with any concrete example. Is there any solved example with a dataset (10 sentences)? 12. Good article. I’d like to point out one thing about the figure in the CBOW section, which has the potential to mislead readers. You indicated many WI matrices in the figure, one per context word. The figure seems to suggest that there are different WI matrices to be trained for each input word. However, WI is the final target matrix (the word embeddings) which we ultimately want to get. Strictly speaking, for better understanding it should be ONE WI matrix which is connected to many context word input vectors and many hidden layer neurons. Please let me know if you have different opinions. Thanks 1. Thanks for liking the article. All WI matrices are identical. There is no separate index to different matrices. Also, the article states “The modification, shown below, consists of replicating the input to hidden layer connections C times, the number of context words, and adding a divide by C operation in the hidden layer neurons.”, which implies identical WI matrices. 1. I am not talking about your knowledge or how correctly you understand the model. (Please read the comment once again.) I am talking about the figure itself. As mentioned in the comment, the figure could mislead readers, since you *implied identical WI matrices* in the article. I thought that this blog is for education/sharing purposes. If so, the article should provide knowledge as clearly as possible for others. In that respect, this article has room for improvement. However, from your reply you are just saying that you know what I know. It’s your choice whether you improve the quality of the article. Thanks. 13. This is the best word2vec article that I have read! 14. Martin says: Hello Krishan, I did not understand how you get the activation vector for the output layer neurons HtWO = [0.100934 -0.309331 -0.122361 -0.151399 0.143463 -0.051262 -0.079686 0.112928] Can you please explain this step? Thanks 1. The hidden layer output (H) is calculated earlier. The weight matrix WO is known/defined earlier. So the output is simply a product between the transpose of the hidden layer output and WO. 15. Ahmed Khan says: Thank you, nice article. Please help me understand how you get 8 words out of those 3 sentences. Did you take out the stop words?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6294817328453064, "perplexity": 942.2752637708631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738905.62/warc/CC-MAIN-20200812141756-20200812171756-00161.warc.gz"}
https://pure.ulster.ac.uk/en/publications/characterisation-of-pore-structure-development-of-alkali-activate
# Characterisation of pore structure development of alkali-activated slag cement during early hydration using electrical responses X Zhu, Z Zhang, K Yang, Bryan Magee, Y Wang, L Yu, S Nanukuttan, Q Li, S Mu, C Yang, M Basheer Research output: Contribution to journal › Article › peer-review 35 Citations (Scopus) ## Abstract This paper describes the results of a study investigating early-age changes in the pore structure of alkali-activated slag cement (AASC)-based paste. Capillary porosity, pore solution electrical conductivity and electrical resistivity of hardened paste samples were examined, and the tortuosity was determined using Archie’s law. X-ray computed micro-tomography (X-ray μCT) and scanning electron microscopy (SEM) analyses were also carried out to explain conclusions based on electrical resistivity measurements. AASC pastes with 0.35 and 0.50 water-binder ratios (w/b) were tested at 3, 7, 14 and 28 days and benchmarked against Portland cement (PC) controls. Results indicated that for a given w/b, the electrical resistivity and capillary porosity of the AASC paste were lower than those of the PC control, whilst an opposite trend was observed for the pore solution conductivity, which is due to AASC paste’s significantly higher ionic concentration. Further, capillary pores in AASC paste were found to be less tortuous than those in the PC control, according to estimations using Archie’s law and the results of X-ray μCT and SEM analysis. In order to achieve comparable levels of tortuosity, therefore, AASC-based materials are likely to require longer periods of curing. The work confirms that electrical resistivity measurement offers an effective way to investigate pore structure changes in AASC-based materials, despite threshold values differing significantly from PC controls due to intrinsic differences in pore solution composition and microstructure. Original language: English | Pages: 139-149 | Number of pages: 11 | Journal: Cement and Concrete Composites | Volume: 89 | Early online date: 6 Mar 2018 | DOI: https://doi.org/10.1016/j.cemconcomp.2018.02.016 | Publication status: Published (in print/issue) - 31 May 2018 ## Keywords • Pore tortuosity • alkali-activated slag cement • pore solution • electrical resistivity • capillary porosity • X-ray μCT
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8407670855522156, "perplexity": 11999.948376153854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711121.31/warc/CC-MAIN-20221206225143-20221207015143-00806.warc.gz"}
https://wiki.seg.org/index.php?title=Frequency&oldid=173877
# Frequency Series: Geophysical References Series | Title: Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing | Authors: Enders A. Robinson and Sven Treitel | Chapter: 4 | DOI: http://dx.doi.org/10.1190/1.9781560801610 | ISBN: 9781560801481 | SEG Online Store Why is the use of sines and cosines of the time variable t relevant to the theory of digital filtering? The reason is that sines and cosines let us define frequency, and the concept of frequency is basic to what we mean by a filter, be it digital or analog. Let us think about a rotating wheel, as on an old-fashioned wagon or stagecoach. Consider, for example, a wheel that is rotating at a rate of 10 times a second. If we consider one spoke on this wheel to be a reference vector, this vector makes 10 complete rotations in every second. Each rotation through $360^{\circ}$ (which is $2\pi$ radians) represents one cycle, so we say that the vector has a cyclic frequency of 10 cycles per second. The word hertz (Hz) stands for cycles per second, so alternatively, we say that the vector has a frequency of 10 Hz. Up to this point, we have not specified in which direction the vector is rotating. As a matter of mathematical convention, we say that the vector has a frequency of +10 Hz if it is rotating in the counterclockwise direction, whereas it has a frequency of –10 Hz if it is rotating in the clockwise direction. The frequency customarily is denoted by the symbol f. We talk about negative frequencies, but do we encounter them in seismic work? In the seismic case, only real-valued signals are recorded. Therefore, a positive frequency cannot be distinguished from its negative counterpart. For example, a frequency of 10 Hz is indistinguishable from a frequency of –10 Hz. Hence, only positive frequencies need be considered in ordinary circumstances. Period is a general term used to denote an interval of time, but the word period has special meaning in frequency analysis. By the period of a repeating process, we mean the shortest time interval for which the process exactly repeats itself. In our example of the rotating vector, the motion exactly repeats itself 10 times a second, so we say that it has a period $T = 1/10$ s. What does this have to do with the subject of frequency? The period T and the frequency f are reciprocals of each other, that is, $T=1/f$ and $f=1/T$. As we have seen, hertz is a measure of cyclic frequency, that is, a measure of the number of cycles per second. However, there is yet another type of frequency: the angular frequency $\omega$, which is measured in radians per second. Angular frequency is related to cyclic frequency f by the fundamental relation $\omega = 2\pi f$ (4) In general, the word frequency can be used for either $\omega$ or f, so one often must determine from the context which frequency is meant. Usually, if the word frequency occurs by itself, cyclic frequency in units of hertz is implied. The $2\pi$ factor in the above formula comes from the fact that there are $2\pi$ radians (or $360^{\circ}$) in each complete rotation.
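As a quick numerical illustration of the relations above (the period as the reciprocal of the cyclic frequency, and the angular frequency $\omega = 2\pi f$), here is a short Python sketch using the 10-Hz rotating-wheel example.

```python
import math

f = 10.0                    # cyclic frequency in hertz (cycles per second)
T = 1.0 / f                 # period in seconds: T = 1/f = 0.1 s
omega = 2.0 * math.pi * f   # angular frequency in radians per second (~62.83 rad/s)

print(T, omega)
print(math.isclose(f, 1.0 / T))                 # f = 1/T
print(math.isclose(omega, 2.0 * math.pi / T))   # omega = 2*pi/T, equivalent to omega = 2*pi*f
```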
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786227941513062, "perplexity": 420.40946527228704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00266.warc.gz"}
https://www.groundai.com/project/morphometric-analysis-in-gamma-ray-astronomy-using-minkowski-functionals-iii-sensitivity-increase-via-a-refined-structure-quantification/
# Morphometric analysis in gamma-ray astronomy using Minkowski functionals: III. Sensitivity increase via a refined structure quantification M. A. Klatt, K. Mecke ###### Key words: Methods: data analysis – Methods: statistical – Techniques: image processing – Gamma rays: diffuse background Affiliations: 1) Karlsruhe Institute of Technology (KIT), Institute of Stochastics, Englerstr. 2, 76131 Karlsruhe, Germany; 2) Institut für Theoretische Physik, Universität Erlangen-Nürnberg, Staudtstr. 7, 91058 Erlangen, Germany; 3) Erlangen Centre for Astroparticle Physics, Universität Erlangen-Nürnberg, Erwin-Rommel-Str. 1, 91058 Erlangen, Germany ###### Abstract Context: Aims: We pursue a novel morphometric analysis to detect sources in very-high-energy gamma-ray counts maps via structural deviations from the background noise, without assuming any prior knowledge about potential sources. The rich and complex structure of the background noise is characterized by Minkowski functionals from integral geometry. By extracting more information out of the same data, we aim for an increased sensitivity. Methods: In the first two papers, we derived accurate estimates of the joint distribution of all Minkowski functionals. Here, we use this detailed structure characterization to detect structural deviations from the background noise in a null hypothesis test. We compare the analysis of the same simulated data with either a single or all Minkowski functionals. Results: The joint structure quantification can detect formerly undetected sources. We show how the additional shape information leads to the increase in sensitivity. We explain the unique concepts and possibilities of our analysis compared to a standard counting method in gamma-ray astronomy, and we present in an outlook further improvements, especially for the detection of diffuse background radiation, and generalizations of our technique. Conclusions: ## 1 Morphometric source detection in gamma-ray astronomy The morphometric analysis quantifies the complex structural information that is contained in the background noise in ground-based very-high-energy (VHE) gamma-ray astronomy. We thus quantify the shape of sky maps without any assumption about potential sources (Klatt et al. 2012; Göring et al. 2013). The Minkowski functionals from integral geometry can comprehensively and robustly quantify the complex shape provided by spatial data (Schneider & Weil 2008; Mantz et al. 2008; Schröder-Turk et al. 2010, 2011, 2013). They allow for an efficient data analysis and a sensitive hypothesis test. Because of their versatility, they can not only check for specific structures or arrangements but can detect any structural deviation from the expected background. In other words, by characterizing the shape of a noisy sky map, more information can be taken out of the same data without assuming prior knowledge about the source. This is in contrast to the common null hypothesis test by Li & Ma (1983), which only uses the total number of counts but no further geometric information; the latter might be especially valuable for the analysis of extended sources or diffuse VHE emissions (Aharonian et al. 2006a,b, 2007). It is also in contrast to full likelihood fits of models to the measured data of high-energy gamma-ray telescopes (Mattox et al. 1996; Atwood et al. 2009), which strongly depend on the model and on the a priori knowledge about the sources. 
The methods and concepts in this article are in principle applicable to any random field and any spatial data in order to detect inhomogeneities or other structural deviations. They could, for example, be interesting for medical data sets, e.g., in tumor recognition (Canuto et al. 2009; Larkin et al. 2014; Arfelli et al. 2000; Michel et al. 2013), for geospatial data and raster data in earth science (Stonebraker et al. 1993), in image and video analysis, where a fast analysis for the detection of objects is needed (Borg et al. 2005; Yilmaz et al. 2006; Quast & Kaup 2011), or in the related field of pattern recognition (Jain et al. 2000; Theodoridis & Koutroumbas 2009). However, the technique is especially interesting for very-high-energy (VHE) gamma-ray astronomy, where faint extended signals are overlaid by strong background noise (Buckley et al. 2008). Especially for short observation times and low statistics, that is, when an increase in sensitivity is most needed, the advantage of additional structural information should be most effective: the excess in the number of counts might not be significant because of the strong fluctuations of a Poisson distribution relative to its small mean value. However, the improbable spatial arrangement of the fluctuations can eventually lead to the detection of the source. In astronomy, the Minkowski functionals are already used as probes of non-Gaussianity in the cosmic microwave background (Schmalzing et al. 1999; Novikov et al. 2000; Gay et al. 2012; Ducout et al. 2013; Novaes et al. 2014), to characterize nuclear matter in supernova explosions (Schuetrumpf et al. 2013, 2015), and to investigate the large-scale structure of the universe (Mecke et al. 1994; Colombi et al. 2000; M. Kerscher et al. 2001; Kerscher et al. 2001; Wiegand et al. 2014). In the first two papers of this series, we introduced the morphometric analysis to gamma-ray astronomy and derived accurate estimates of the structure distribution of the background. Here, we apply this refined shape analysis to simulated data and demonstrate how the additional geometric information can lead to a strong increase in sensitivity. In Sec. 2, we briefly summarize the most important steps of the morphometric analysis. In Sec. 3, we demonstrate the increase in sensitivity via a refined structure quantification. By analyzing the same data simultaneously with all three Minkowski functionals instead of a simple structure characterization, the compatibility with the background structure can decrease by 14 orders of magnitude, that is, the probability of finding such a fluctuation in the background drops to about $10^{-19}$ instead of $10^{-5}$. Formerly undetected sources can thus eventually be detected; whether this happens depends, of course, on whether the shape of the source gives rise to a structural deviation. The morphometric analysis is then compared in Section 4 to a standard null hypothesis test in gamma-ray astronomy. The comparison depends both on the shape of the source and on the experimental details. An advantage of the morphometric analysis is that it is rather independent of the size of the scan window. Moreover, we discuss an example for which there is no significant excess in the total number of counts, but the source can still be detected because of the additional structural information. Section 5 contains a summary of the results and a conclusion. An outlook on further possible extensions is presented in Sec. 6. In Appendix A, we introduce a new test statistic, which combines different thresholds. 
This allows, for example, for a better detection of diffuse radiation. (Parts of this article are from the PhD thesis of one of the authors; Klatt 2016.) ## 2 The statistical significance of structural deviations In the first paper of this series, we explained in detail the structure characterization of a counts map itself using Minkowski functionals. Moreover, we rigorously defined the null hypothesis test that (globally) detects statistically significant deviations in the background noise. The most important steps were briefly repeated in the second paper. For better readability, we also repeat here the definition of the test statistics. The counts map is first turned into a black-and-white image by thresholding, that is, all pixels with a number of counts larger than or equal to a given threshold are set to black (otherwise white). The Minkowski functionals of the resulting two-dimensional black-and-white image are given by the area $A$, perimeter $P$, and Euler characteristic $\chi$ of the black pixels (Schröder-Turk et al. 2011). The last functional is a topological quantity. It is given by the number of clusters minus the number of holes. Given a measured count map, we compute for each threshold a triplet of Minkowski functionals. The null hypothesis is that there are only background signals, that is, we assume that all events are randomly, independently, and homogeneously distributed over the field of view. The first two papers demonstrated how detector effects that distort the homogeneity can be corrected for. The numbers of counts follow a Poisson distribution, and they are independent for different bins of equal size. Their expectation is the background intensity. For each given threshold, the probability distribution of the Minkowski functionals can be determined under the null hypothesis that there are only background signals, as explained in the second paper of this series. The joint probability distribution of the Minkowski functionals sensitively characterizes the “shape of the background noise”. It allows us to define a null hypothesis test that detects statistically significant deviations from the background morphology. Following Neyman & Pearson (1933), we defined the compatibility of a measured triplet with the null hypothesis: $C(A,P,\chi) = \sum_{P(A_i,P_i,\chi_i)\,\leq\,P(A,P,\chi)} P(A_i,P_i,\chi_i)$. (1) For convenience, we then defined the deviation strength as the negative logarithm of this likelihood value: $D(A,P,\chi) := -\log_{10} C(A,P,\chi)$; (2) the larger the deviation strength, the larger the statistical significance of the structural deviation from the expected background. We reject the null hypothesis of a pure background measurement if the deviation strength exceeds the value corresponding to the common criterion of a $5\sigma$ deviation. ## 3 Sensitivity increase via structure characterization We demonstrated that the morphometric analysis is a promising and innovative spatial data analysis in the first paper of this series, where we also discussed in an outlook its potentials and advantages. We achieved an accurate characterization of the complex structure of the background noise in the second paper. Here, we show how our refined shape analysis can indeed lead to a strong increase in sensitivity. The advanced hypothesis test based on all three Minkowski functionals can detect sources that remain undetected if only a single functional is used. The deviation strength w.r.t. only the area $A$ is in the following called the “simple deviation strength”, while the deviation strength w.r.t. the complete characterization via all three Minkowski functionals is called the “joint deviation strength”.
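To make the quantities of Sect. 2 concrete, the following Python sketch thresholds a toy counts map and computes the triplet of Minkowski functionals of the resulting black-and-white image: the area $A$ (number of black pixels), the perimeter $P$ (number of black/white pixel edges, assuming white boundary conditions), and the Euler characteristic $\chi$ (clusters minus holes). It is a simplified illustration under these stated conventions, not the authors' analysis code, and the toy map is just Poisson background noise.

```python
import numpy as np
from scipy import ndimage

def minkowski_functionals(counts, threshold):
    """Area, perimeter and Euler characteristic of the black (>= threshold) pixels."""
    bw = (counts >= threshold).astype(int)      # black-and-white image
    area = int(bw.sum())                        # A: number of black pixels

    # P: number of black/white pixel edges, with white boundary conditions.
    padded = np.pad(bw, 1)
    perimeter = 0
    for axis in (0, 1):
        for shift in (1, -1):
            perimeter += int(np.sum(padded & (1 - np.roll(padded, shift, axis=axis))))

    # chi: clusters (4-connected black pixels) minus holes
    # (8-connected white regions, excluding the outer background).
    _, n_clusters = ndimage.label(bw)
    _, n_white = ndimage.label(1 - padded, structure=np.ones((3, 3), dtype=int))
    euler = n_clusters - (n_white - 1)
    return area, perimeter, euler

# Toy counts map: pure Poisson background noise with intensity 2 per pixel.
rng = np.random.default_rng(1)
counts = rng.poisson(lam=2.0, size=(16, 16))
for rho in (1, 3, 5):                           # one triplet (A, P, chi) per threshold
    print(rho, minkowski_functionals(counts, rho))
```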
### 3.1 Examples of Minkowski sky maps In order to compare the simple to the joint deviation strength, we define a test pattern including sources of different sizes and different integrated fluxes and simulate count maps accordingly, see Fig. 1. The count map is analyzed both via a Minkowski sky map of the simple deviation strength and via one of the joint deviation strength, see Fig. 1. Note that the sources can be chosen much weaker than in the test pattern in the first paper of this series. This is because the sliding window used here collects more statistics than the small sliding windows used there. Therefore, weaker sources can be detected against strong background noise. To demonstrate that the increased sensitivity is not a coincidence of a single fluctuation but a true gain in sensitivity due to the additional structure information, we plot the average of 100 Minkowski sky maps based on different simulations, see Fig. 1. The joint deviation strengths are on average distinctly larger than the simple deviation strengths for the very same data. The simple deviation strength can be expected to be significant only for the strongest sources. However, using all Minkowski functionals to characterize the structure of the counts maps, i.e., extracting more information from the same data, all sources are detected (where the faint sources are, of course, in single simulations sometimes detected or not, depending on statistical fluctuations). This confirms the initial idea to improve the sensitivity via an improved structure characterization. The test pattern in Fig. 1 also reveals another advantage of the morphometric analysis. Sources of different sizes can be detected with the same scan window size, in contrast to the standard counting method. This is discussed in more detail in Section 4. In an outlook based on time-consuming calculations of the DoS for a larger scan window, we found that the sensitivity increase gets even stronger with further increasing window sizes. If necessary, the analysis can be extended to such window sizes. ### 3.2 Systematic analysis of increase in sensitivity The test pattern in Fig. 1 demonstrates that using the joint structure characterization allows for detecting formerly undetected sources by taking additional morphometric information into account. Of course, an increase in sensitivity can only be achieved if there actually is additional nontrivial shape information within the scan window, more precisely, if the shape of the source is structured on the length scale of the sliding window. If any morphometric approach is used to analyze a completely uniform offset in the background intensity, i.e., a Poisson random field with a different intensity, the result must be less significant than for a simple method based only on the total number of counts. This is simply because the additional structural information is in perfect accordance with the background model, and the only difference is a different total number of counts. In Fig. 1, the simple and joint deviation strengths are systematically compared to each other for differently shaped sources: 1. a true point source which is smaller than a pixel, 2. a uniform offset in the background intensity, and 3. a Gaussian-shaped source. Between 0.75 and 7.5 million counts maps are simulated using the same intensity profile but different integrated fluxes. For each count map, both the simple and the joint deviation strength are determined. 
### 3.2 Systematic analysis of increase in sensitivity

The test pattern in Fig. 1 demonstrates that using the joint structure characterization allows for detecting formerly undetected sources by taking additional morphometric information into account. Of course, an increase in sensitivity can only be achieved if there actually is additional nontrivial shape information within the scan window, more precisely, if the shape of the source is structured on the length scale of the sliding window. If any morphometric approach is applied to a completely uniform offset in the background intensity, i.e., a Poisson random field with a different intensity, the result must be less significant than for a simple method based only on the total number of counts. This is simply because the additional structural information is in perfect accordance with the background model and the only difference is a different total number of counts.

In Fig. 1, the simple and joint deviation strengths are systematically compared to each other for differently shaped sources:

1. a true point source that is smaller than a pixel,
2. a uniform offset in the background intensity, and
3. a Gaussian-shaped source.

Between 0.75 and 7.5 million counts maps are simulated using the same intensity profile but different integrated fluxes. For each count map, both the simple and the joint deviation strength are determined. Given a simple deviation strength, the conditional frequency of the joint deviation strength is determined, i.e., for all cells with this simple deviation strength the empirical probability density function of the joint deviation strength is computed. The result is plotted using a color scale in Fig. 1. The black diagonal line indicates equal values of simple and joint deviation strength. The vertical and horizontal black lines depict the null hypothesis criterion for the simple or joint deviation strength, respectively.

Not even the strongest true point source in these simulations would have been detected by a simple counting method, because the source signals are suppressed by the large additional background. In contrast to this, even the simple deviation strength uses additional information, namely the dependence on the threshold, and can thus, e.g., detect a single pixel with an exceptionally high number of counts because of an unlikely black pixel at very high thresholds. This advantage of being more independent of the system size is discussed in more detail below in Section 4. However, compared to the area alone, there is no additional information in the perimeter or Euler characteristic: if there is only a single black pixel at high thresholds, the only possible values for P and χ are 4 and 1, respectively. Therefore, the simple and joint deviation strengths are exactly identical.

For the uniform source, the additional information must, as stated above, lead to a decrease of the deviation strength. Interestingly, this decrease turns out to be rather small even in the extreme case of a constant offset. The decrease of the deviation strength could even be an advantage in that the joint deviation strength is slightly less sensitive to errors in the estimation of the background intensity, and a source would still be detected because of the strong deviation in the area A.

For the structured source, there is a tremendous increase in sensitivity for the joint deviation strength compared to the simple one. For all counts maps for which the corresponding values of the deviation strengths are within the dashed box, the source is not detected if only the area characterizes the structure, but it is detected by the joint deviation strength, i.e., if all Minkowski functionals characterize the shape of the counts map. For the same counts map for which the simple deviation strength based only on the area is below 5, i.e., the compatibility is more than $10^{-5}$, the joint deviation strength reaches values of nearly 20, i.e., compatibilities of less than $10^{-19}$. In other words, if the structure is characterized not only by the area but by all Minkowski functionals, the compatibility with the background structure can drop by 14 orders of magnitude. There is no significant excess in the total number of counts, but there is one in the structure of the counts map. Only by taking this morphometric information into account can a formerly undetected source now be detected using the same data.
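A minimal sketch of how the compatibility of Eq. (1) and the deviation strength of Eq. (2) follow from a tabulated joint probability distribution of the Minkowski functionals (the toy probabilities below are made up for illustration; in practice the table comes from the DoS calculation of the second paper):

```python
import numpy as np

def compatibility(joint_pmf, observed):
    """Eq. (1): sum of the probabilities of all macrostates (A, P, chi)
    that are at most as likely as the observed triplet."""
    p_obs = joint_pmf[observed]
    return sum(p for p in joint_pmf.values() if p <= p_obs)

def deviation_strength(joint_pmf, observed):
    """Eq. (2): minus the decadic logarithm of the compatibility."""
    return -np.log10(compatibility(joint_pmf, observed))

# Toy joint PMF over a few (A, P, chi) macrostates
joint_pmf = {(0, 0, 0): 0.50, (1, 4, 1): 0.30, (2, 6, 1): 0.15, (2, 8, 2): 0.05}
print(deviation_strength(joint_pmf, observed=(2, 8, 2)))  # rare macrostate, D of about 1.3
```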
A more formal explanation of this intuitive understanding can be given with the aid of Fig. 3. Given a measured area, the compatibility w.r.t. only the area is the sum over all probabilities to the left of the left dotted line and to the right of the right dotted line, i.e., over all macrostates with an area less likely than the given area. In the example of a uniform offset in the background intensity, the structure, quantified by the perimeter (because of the white boundary conditions the perimeter can only take on even values, and for an odd value of P the probability is zero; for an easier visualization, the bin length in P is two), is in agreement with the background structure, and the perimeter most likely takes on a "typical" value, i.e., a likely perimeter for the given area, e.g., the case represented by the green square. The compatibility is then the sum over all probabilities outside the inner contour line, which results in a comparably large compatibility. However, in the example of a structured source the perimeter might take on an unlikely value for the given area, e.g., the case represented by the blue square. The compatibility is now the sum over all probabilities outside the outer contour line and is thus much smaller. A structural deviation from the background structure leads to a more significant result of the morphometric analysis compared to simple counting methods. For two different b/w images with the same compatibility with the background w.r.t. the area, the additional information of the perimeter specifies whether the b/w image is indeed compatible with the background structure or not.

## 4 Comparison to a standard counting method

The standard null hypothesis test in gamma-ray astronomy was introduced in Li & Ma (1983): it compares the number of signals in the so-called "on-region", i.e., in the vicinity of an expected source, to the number of background signals detected in an "off-region", i.e., a region in the sky without sources. The method simply counts the number of photons. Given an exposure ratio α, the significance is

$\sigma = \sqrt{\ln\left\{\left[\frac{1+\alpha}{\alpha}\left(\frac{N_{\mathrm{on}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right)\right]^{2N_{\mathrm{on}}}\left[(1+\alpha)\left(\frac{N_{\mathrm{off}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right)\right]^{2N_{\mathrm{off}}}\right\}}.$   (3)

The significance can be expressed in terms of a deviation strength (Göring et al. 2013):

$D(\sigma) = -\log_{10}\left[1-\operatorname{erf}\!\left(\frac{\sigma}{\sqrt{2}}\right)\right],$   (4)

where erf is the error function. This standard counting method allows for a simple and fast null hypothesis test. However, the analysis only uses the total number of counts in the observation window. Our morphometric analysis follows a completely new ansatz based on structure characterization, which can not only detect an excess of counts but is also able to detect any structural deviation from the background noise. It can detect structural deviations in count maps where the number of expected counts is in perfect agreement with the background intensity (Klatt 2016).

### 4.1 Dependence on experimental details

Therefore, there is no direct and straightforward comparison showing that one of the methods is always more sensitive than the other. A comparison of the advantages and different possibilities of the two methods is complicated and depends on the experimental details and the source shape. The standard counting method is more likely to detect sources if there is no interesting structure to be quantified within the scan window. The additional structural information is then in agreement with the background noise, as discussed in Section 3.2. The morphometric analysis based on all Minkowski functionals is, in this case, less likely to detect a source compared to an analysis that only takes the total number of counts into account. However, if there is a distinct structural difference from the background noise, the morphometric analysis has the advantage of being able to use this additional information. Whether or not there is an increase in sensitivity compared to the counting method by Li and Ma also strongly depends on the experimental details, e.g., the bin size, as briefly discussed in Göring (2012).
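A small sketch of Eqs. (3) and (4) (our own helper functions, using scipy's error function; for very large significances one would use erfc instead to avoid numerical cancellation):

```python
import numpy as np
from scipy.special import erf

def li_ma_significance(n_on, n_off, alpha):
    """Eq. (3): Li & Ma (1983) significance of the counts in the on-region."""
    n_tot = n_on + n_off
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * np.log((1.0 + alpha) * n_off / n_tot)
    return np.sqrt(2.0 * (term_on + term_off))

def deviation_strength_from_sigma(sigma):
    """Eq. (4): express a significance as a deviation strength."""
    return -np.log10(1.0 - erf(sigma / np.sqrt(2.0)))

sigma = li_ma_significance(n_on=130, n_off=1000, alpha=0.1)
print(sigma, deviation_strength_from_sigma(sigma))
```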
In contrast to the total number of counts, our morphometric analysis strongly depends on the choice of the bin size. If the bins are too large, interesting source structure might be hidden because it is contained in a single bin. However, if the bin size is too small, the sky map not only gets very noisy, but we can also lose structural information. In the extreme case that all black pixels in the black-and-white image are separated from each other by white pixels, the translation-invariant Minkowski functionals can no longer distinguish different configurations, but only count the total number of black bins. Therefore, the bin size needs to be chosen reasonably, taking the size of the scan window, the point spread function of the telescope, the quality of the data, and the source shape into account, see Göring (2012).

As mentioned above, the morphometric analysis can detect sources even if there is no excess in the signals compared to the background intensity, see Section 3.2. Therefore, we expect the morphometric analysis to be robust against overestimates of the background intensity. However, a more thorough analysis of such effects for real data is beyond the scope of this article. Ideally, it would need extensive simulations to determine the empirical cumulative distribution function not with the a priori unknown background intensity, but instead using an efficient estimate of it, as described in Göring (2012). A main advantage of the morphometric analysis compared to the counting method could eventually be that it avoids observations of an off-region, because a less precise estimate of the background intensity is sufficient.

The method by Li and Ma compares the number of counts in the source region and in regions with only background signals. Obviously, it strongly depends on how accurate the estimate of the background intensity is. Here, we compare the morphometric analysis to the significance of the Li and Ma test for the extreme and most sensitive case of an infinitely long observation of the off-region (an infinite observation time corresponds to using the exact background intensity), i.e., the limit of vanishing exposure ratio α with the expected total number of background counts $\lambda_{\mathrm{tot}}$ in the on-region kept fixed. In this limit,

$\sigma = \sqrt{2}\left\{N_{\mathrm{on}}\ln\left[\frac{N_{\mathrm{on}}}{\lambda_{\mathrm{tot}}}\right]+\lambda_{\mathrm{tot}}-N_{\mathrm{on}}\right\}^{1/2}.$   (5)

For a final comparison of both methods, their dependencies on these experimental details must be accurately studied, which is beyond the scope of this article. Here, we discuss the advantages of the refined morphometric analysis and compare it to the counting method in two examples, where the morphometric analysis detects sources in contrast to the standard method by Li and Ma. These examples show the potential of the morphometric ansatz. However, the choice of the optimal method strongly depends on the details of the data that are to be analyzed, as well as on the information or features that are to be extracted.
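The limiting case of Eq. (5) is easily added to the earlier sketch; comparing it with `li_ma_significance` for a small α and a correspondingly large N_off illustrates the convergence (again with our own variable names):

```python
import numpy as np

def li_ma_significance_known_background(n_on, lam_tot):
    """Eq. (5): Li & Ma significance when the background intensity is known
    exactly (infinitely long off-region observation, alpha -> 0)."""
    return np.sqrt(2.0 * (n_on * np.log(n_on / lam_tot) + lam_tot - n_on))

print(li_ma_significance_known_background(n_on=130, lam_tot=100.0))
```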
### 4.2 Scan window size dependence and low statistics

Besides the above-mentioned robustness against overestimated background intensities or the ability to even detect inhomogeneities with no excess in the total number of counts, another important advantage of the morphometric analysis is that it depends much less on the choice of the size of the scan window. Even sources with extensions much smaller than the scan window size are detected in the morphometric analysis, although there is no significant change in the total number of counts.

Figure 2 shows how even a point source, i.e., a single pixel with increased intensity, can be sensitively detected, because at a high threshold even a single black pixel is very unlikely, independently of whether the counts in the other pixels are in agreement with the null hypothesis. The morphometric analysis can detect sources of very different extensions with the same scan window size. In contrast to this, the standard counting method cannot detect a source if there are at the same time too many pixels with only background signals in the same scan window. If there are many pixels within the scan window that contain only background signals, the source signals can be suppressed. The small excess in the total number of counts is no longer significant. This effect can be reduced if the size of the scan window is adjusted to the extension of the source. However, such an adaptation can possibly lead to a biased choice of the parameter, or to an unknown trial factor if the analysis is repeated with different window sizes.

In Fig. 4, the test pattern from Fig. 1 with differently large sources is analyzed using the same scan window size as in the morphometric analysis of Fig. 1. The large outer sources are of the same size as the scan window; they are detected with a similar significance as by the morphometric analysis. However, the smaller inner sources are not detected with statistical significance, because there are too many background signals in the same scan window. If the scan window size is adjusted, these sources can be detected highly significantly. However, the problem of a possibly biased choice of parameters, as well as of an unknown trial factor, remains.

The most important advantage of the morphometric analysis is, of course, that it incorporates additional structural information. Especially if only low statistics are available, an increase in sensitivity by quantifying the structure of the counts map is probably most needed. For example, a slight excess in the total number of counts might not be significant because of the strong fluctuations of a Poisson distribution relative to its small mean value. Interestingly, especially in such a case the significance of the structural deviation is relatively strong compared to the significance of the number of counts. For example, for a Poisson random field with a low intensity, the clustering of a given number of black pixels is equally likely (or unlikely) as in a field with a high intensity. The excess in the number of counts might not be significant, but the improbable arrangement of the black pixels can lead to the detection of the source. So, the advantage of additional structural information should be most effective when it is most needed.

Figure 5 shows such an example of a weak and hardly detectable source where only low statistics are available. Although hardly visible by eye in the count map, there are strong intensity gradients. The source resembles a dotted pattern; it consists of several nearly pointlike sources that are marked in the count map on the left-hand side. Because of the strong intensity gradients and thus the significant structural deviation from the background noise, the morphometric analysis can take advantage of the additional geometrical information. Because of the low statistics, there are strong statistical fluctuations in the single sky maps.
We therefore do not use here the maximal deviation strength over all thresholds, but an improved test statistic that combines the deviation strengths of different thresholds, as explained in Appendix A. For a more systematic analysis, we simulate 400 samples and compare the deviation strength of the morphometric analysis to the deviation strength D(σ) of the standard counting method from Eqs. (4) and (5), see the right-hand side of Fig. 5. The morphometric analysis analyzes the whole count map. However, it is compared to D(σ) of the standard counting method not only for the same scan window size (blue points), but also for all scan window sizes down to very small ones (red points). Thus, the effect of the window size dependence of the standard counting method can be taken into account. For scan windows smaller than the total size of the counts map, we iterate the scan window over the count map and compare the maximum of all deviation strengths D(σ) to the deviation strength of the morphometric analysis. Although the trial factors that would then be necessary for D(σ) are ignored, which in some cases would considerably reduce the deviation strength, the morphometric analysis is for the vast majority of samples more sensitive than the counting method. While there are only about 5 out of 400 samples for which the counting method yields the larger deviation strength, there are many samples where the source is not detected by the counting method but is detected by the morphometric analysis. In other words, although there might be no significant excess in the total number of counts, the source can still be detected by taking more information out of the same data. In general, a comparison of the morphometric analysis and the standard method by Li and Ma depends on both the experimental details and the structure of the source. For this example, the morphometric analysis is more sensitive.

## 5 Conclusion

The morphometric analysis makes it possible to detect gamma-ray sources via structural deviations from the background noise without any assumptions about potential sources. Comparing the simple to the joint structure characterization, we can demonstrate a significant increase in sensitivity due to the additional shape information that is extracted from the data, see Fig. 1. For the same counts map for which the simple deviation strength based only on the area is below 5, i.e., the compatibility is more than $10^{-5}$, the joint deviation strength can reach values of nearly 20, i.e., compatibilities of less than $10^{-19}$, see Fig. 2. The compatibility with the background structure drops by 14 orders of magnitude, and formerly undetected sources can be detected simply by applying a refined morphometric analysis. Of course, the increase in sensitivity depends on the shape of the source, see Fig. 3.

A comparison of the morphometric analysis to the standard null hypothesis test by Li and Ma in gamma-ray astronomy, see Eqs. (4) and (5), depends both on the shape of the source and on the experimental details, like the binning or the accuracy of the estimate of the background intensity. The morphometric analysis follows a very different ansatz. Besides the advantage of including additional structural information, it depends less on the size of the scan window and can detect both rather extended and pointlike sources using the same scan window size, compare Figs. 1 and 4. Moreover, the advantage of additional structural information should be most effective for short observation times and low statistics. Figure 5 shows an example for which there is no significant excess in the total number of counts, but the source can still be detected because of the additional structural information.
In summary, the main advantages of the morphometric analysis are:

• a shape analysis without prior knowledge about potential sources,
• a sensitivity gain via structure information, especially for low statistics,
• its relative independence of the scan window size, and
• its detection of statistically significant inhomogeneities in the counts map even if the expected total number of counts is in perfect agreement with the background intensity.

Moreover, we expect for the reasons discussed above that it is robust against errors in the estimation of the background intensity. The simulation study in this article demonstrates how additional information extracted from the same data can allow the detection of formerly undetected sources. The next step is to apply these improved techniques to real data from experiments; for first examples of applications to H.E.S.S. sky maps, see Göring (2008, 2012); Klatt (2016).

## 6 Outlook

The morphometric analysis is here shown to be an innovative and efficient spatial data analysis. Of course, there are even further possibilities to extend the analysis. For example, the method can naturally be extended to any spatial dimension d if the structure of the d-dimensional b/w image is characterized by the Minkowski functionals.

### 6.1 Other shape descriptors

The above-defined analysis is very general, and indeed any useful shape descriptor can be used. The Minkowski functionals are versatile tools and can comprehensively quantify the structure of quite different random fields. However, if for a certain system another index is better physically motivated, more important, or more interesting, it can replace the Minkowski functionals, and the basic idea remains unchanged. Only the probability distribution, i.e., the DoS, has to be determined following the procedure from the second paper in this series. The Minkowski functionals already incorporate all additive and conditionally continuous scalar geometrical information, and we have already also introduced the Minkowski tensors to the morphometric analysis in Klatt (2010). Other shape descriptors like the convexity number (Stoyan et al. 1987) or Betti numbers (Robins 2002) are directly applicable to the method described above, but the calculations will be more time-consuming.

Even functions can be used as shape characteristics. A first example could be the cluster function, i.e., the probability of finding two points at a given distance in the same cluster. Only one additional step is needed: the correlation function must be mapped to a scalar. A good choice would be the integral over the absolute value of the difference to the mean (i.e., expected) correlation function of a Poisson random field; this is well defined if there are no long-range correlations, because then all correlation functions converge sufficiently fast to the same constant and thus the difference converges to zero. An even more interesting example could be correlation functions of Minkowski functionals (Mecke et al. 1994; Klatt & Torquato 2014).

### 6.2 Further random fields

We have developed the morphometric analysis for analyzing counts maps in gamma-ray astronomy, where the counts in different pixels are uncorrelated because they result from different events (showers) clearly separated in time. We here show how the morphometric analysis distinguishes homogeneous from inhomogeneous Poisson random fields. However, the morphometric analysis allows for a much more general analysis.
It can detect other deviations from a Poisson assumption, e.g., correlations between the counts in different pixels; see Section 6.2. There might be no deviation in the number of counts (globally or even locally), but a strong deviation in the structure quantified by the Minkowski functionals. Although we have developed the morphometric analysis for analyzing counts maps with uncorrelated pixels, its use might even be more efficient for other applications with correlations between the counts in different pixels, for example, in detectors where an event is simultaneously triggered in neighboring pixels. Note also that the concept of the morphometric analysis can immediately be extended to other random fields, e.g., Boolean models or Gaussian random fields. The Minkowski functionals are, as mentioned above, already used to search for statistically significant deviations from a Gaussian random field in the cosmic microwave background; see, for example, Schmalzing et al. (1999); Gay et al. (2012); Ducout et al. (2013). Of course, the probability distributions of the Minkowski functionals need to be determined, and probably only numerical estimates are possible.

### 6.3 Extensions of the test statistic

In the appendix, we introduce a new test statistic combining different thresholds, see Eq. (7). Instead of the maximum of the deviation strength, we use the sum of the deviation strengths over all thresholds, see Eq. (6). We determine the empirical complementary cumulative distribution function, see Fig. 6. The combination of the structural information at different thresholds especially improves the detection of diffuse radiation and extended sources, see Fig. 8.

###### Acknowledgements.

We thank Christian Stegmann and Daniel Göring for valuable discussions, suggestions, and advice. We thank the German Research Foundation (DFG) for the Grant No. ME1361/11 awarded as part of the DFG-Forschergruppe FOR 1548 "Geometry and Physics of Spatial Random Systems."

## Appendix A Combining different thresholds

So far, only the deviation strength of a single threshold is directly used for the null hypothesis test. The deviation strengths at other thresholds are used only indirectly by the fact that they are smaller than the maximum. However, the deviation strength as a function of the threshold contains a lot of information, e.g., even to some degree the extension of the source (Göring et al. 2013, Fig. 3). Taking this information into account could yield a profound additional insight into the spatial data. However, it is currently out of reach to determine the probability distribution of the deviation strength as a function of the threshold. Nevertheless, the most important information is whether the maximal deviation strength is only a fluctuation at a single threshold or whether there are strong structural deviations over a large range of thresholds (Göring et al. 2013, Fig. 3). This can be quantified by replacing the maximum of the deviation strength by the sum over all thresholds (the infinity norm is replaced by the 1-norm),

$S := \sum_{\rho=0}^{\infty} D(\rho),$   (6)

which is well defined, i.e., finite, because for every counts map there is a maximum count; for thresholds above this maximum, the b/w image is completely white. For a large enough threshold, the completely white image is also the most likely configuration under the background model. For such thresholds, the compatibility is one and the deviation strength zero; the series defined in Eq. (6) is actually a finite sum. The sum S is the new test statistic, replacing the maximum deviation strength used before.
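A compact sketch of this combined statistic, together with an empirical tail probability in the spirit of the ECCDF-based statistic of Eq. (7) below (the per-threshold deviation strength and the list of pure-background simulations are placeholders supplied by the caller):

```python
import numpy as np

def combined_statistic(counts, deviation_strength):
    """Eq. (6): sum of the deviation strengths over all thresholds.
    The sum is finite because D(rho) vanishes above the maximum count."""
    return sum(deviation_strength(counts, rho)
               for rho in range(int(counts.max()) + 1))

def empirical_tail_statistic(s_observed, s_background_samples):
    """ECCDF-based statistic: -log10 of the fraction of background-only
    simulations whose combined statistic is at least as large as observed."""
    s_bg = np.asarray(s_background_samples, dtype=float)
    tail = np.mean(s_bg >= s_observed)
    return -np.log10(max(tail, 1.0 / (s_bg.size + 1)))  # guard against log(0)
```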
The distribution of this new test statistic cannot be calculated analytically, but efficient and tight approximations might be achievable, although this is beyond the scope of this article. Here, the cumulative distribution is determined numerically, and the sensitivity gain is shown for simulated data.

### A.1 Empirical cumulative distributions

We simulate counts maps and calculate S for each of them, from which we derive the empirical probability density function (EPDF) for different window sizes and background intensities. Figure 6 shows the EPDF for a Poisson counts map with constant intensity. The new test statistic is defined via the empirical complementary cumulative distribution function (ECCDF), i.e., via the probability of finding a value larger than the measured sum of deviation strengths S. This definition follows, like that of the compatibility in Section 2, the scheme given in Neyman & Pearson (1933) for constructing a most efficient hypothesis test:

$T(S) = -\log_{10}\int_{S}^{\infty}\mathrm{d}s\, f(s),$   (7)

and the null hypothesis is rejected if T(S) exceeds the value that corresponds to the common 5σ criterion. Figure 6 plots Eq. (7) for different system sizes and background intensities, based on simulated count maps for each system. As expected, the test statistic strongly depends on the background intensity, because the number of thresholds with nonzero deviation strength varies. Interestingly, the dependence on the system size is rather weak. The new test statistic is chosen such that, in simulations of the background model, its EPDF is the same as for the deviation strength at a single threshold,

$f(T) = \ln 10 \cdot 10^{-T},$   (8)

see Fig. 7.

### A.2 Sensitivity increase for diffuse radiation

Especially for broad sources, which exhibit structural deviations over a large range of thresholds, the new test statistic can lead to an additional increase in sensitivity. Figure 8 exemplarily shows this increase for simulated diffuse radiation. The new testing procedure is more sensitive because it includes the information that there are also strong structural deviations at thresholds other than that of the maximum deviation strength.

## References

• Aharonian et al. (2006a) Aharonian, F. et al. 2006a, Astron. Astrophys., 449, 223
• Aharonian et al. (2006b) Aharonian, F. et al. 2006b, Nature, 439, 695
• Aharonian et al. (2007) Aharonian, F. et al. 2007, Astrophys. J., 661, 236
• Arfelli et al. (2000) Arfelli, F. et al. 2000, Radiology, 215, 286
• Atwood et al. (2009) Atwood, W. B. et al. 2009, Astrophys. J., 697, 1071
• Borg et al. (2005) Borg, M., Thirde, D., Ferryman, J., et al. 2005, in Advanced Video and Signal Based Surveillance, 2005. AVSS 2005. IEEE Conference on, 16
• Buckley et al. (2008) Buckley, J., Byrum, K., Dingus, B., et al. 2008, ArXiv e-prints, arXiv:0810.0444
• Canuto et al. (2009) Canuto, H. C., McLachlan, C., Kettunen, M. I., et al. 2009, Magn. Reson. Med., 61, 1218
• Colombi et al. (2000) Colombi, S., Pogosyan, D., & Souradeep, T. 2000, Phys. Rev. Lett., 85, 5515
• Ducout et al. (2013) Ducout, A., Bouchet, F. R., Colombi, S., Pogosyan, D., & Prunet, S. 2013, Mon. Not. R. Astron. Soc., 429, 2104
• Gay et al. (2012) Gay, C., Pichon, C., & Pogosyan, D. 2012, Phys. Rev. D, 85, 023011
• Göring (2008) Göring, D. 2008, Master's thesis (Diplomarbeit), Universität Erlangen-Nürnberg
• Göring (2012) Göring, D. 2012, Gamma-Ray Astronomy Data Analysis Framework based on the Quantification of Background Morphologies using Minkowski Tensors, PhD thesis, Universität Erlangen-Nürnberg
• Göring et al. (2013) Göring, D., Klatt, M. A., Stegmann, C., & Mecke, K. 2013, Astron. Astrophys., 555, A38
• Jain et al. (2000) Jain, A. K., Duin, R. P. W., & Mao, J. 2000, IEEE T. Pattern Anal., 22, 4
• Kerscher et al. (2001) Kerscher, M., Mecke, K., Schmalzing, J., et al. 2001, Astron. Astrophys., 373, 1
• Klatt (2010) Klatt, M. A. 2010, Master's thesis (Diplomarbeit), Universität Erlangen-Nürnberg
• Klatt (2016) Klatt, M. A. 2016, Morphometry of random spatial structures in physics, PhD thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
• Klatt et al. (2012) Klatt, M. A., Göring, D., Stegmann, C., & Mecke, K. 2012, AIP Conf. Proc., 1505, 737
• Klatt & Torquato (2014) Klatt, M. A. & Torquato, S. 2014, Phys. Rev. E, 90, 052120
• Larkin et al. (2014) Larkin, T. J., Canuto, H. C., Kettunen, M. I., et al. 2014, Magn. Reson. Med., 71, 402
• Li & Ma (1983) Li, T.-P. & Ma, Y.-Q. 1983, Astrophys. J., 272, 317
• M. Kerscher et al. (2001) Kerscher, M., Mecke, K., Schuecker, P., et al. 2001, Astron. Astrophys., 377, 1
• Mantz et al. (2008) Mantz, H., Jacobs, K., & Mecke, K. 2008, J. Stat. Mech., 12, P12015
• Mattox et al. (1996) Mattox, J. R. et al. 1996, Astrophys. J., 461, 396
• Mecke et al. (1994) Mecke, K., Buchert, T., & Wagner, H. 1994, Astron. Astrophys., 288, 697
• Michel et al. (2013) Michel, T. et al. 2013, Phys. Med. Biol., 58, 2713
• Neyman & Pearson (1933) Neyman, J. & Pearson, E. S. 1933, Phil. Trans. R. Soc. London, Ser. A, 231, 289
• Novaes et al. (2014) Novaes, C., Bernui, A., Ferreira, I., & Wuensche, C. 2014, J. Cosmol. Astropart. Phys., 2014, 018
• Novikov et al. (2000) Novikov, D., Schmalzing, J., & Mukhanov, V. F. 2000, Astron. Astrophys., 364, 17
• Quast & Kaup (2011) Quast, K. & Kaup, A. 2011, EURASIP J. Image Video Process., 2011, 14
• Robins (2002) Robins, V. 2002, in Lecture Notes in Physics, Vol. 600, Morphology of Condensed Matter, ed. K. Mecke & D. Stoyan (Springer, Berlin, Heidelberg), 261
• Schmalzing et al. (1999) Schmalzing, J., Buchert, T., Melott, A. L., et al. 1999, Astrophys. J., 526, 568
• Schneider & Weil (2008) Schneider, R. & Weil, W. 2008, Stochastic and Integral Geometry (Probability and Its Applications) (Berlin: Springer)
• Schröder-Turk et al. (2010) Schröder-Turk, G. E., Kapfer, S., Breidenbach, B., Beisbart, C., & Mecke, K. 2010, J. Microsc., 238, 57
• Schröder-Turk et al. (2011) Schröder-Turk, G. E., Mickel, W., Kapfer, S. C., et al. 2011, Adv. Mater., 23, 2535
• Schröder-Turk et al. (2013) Schröder-Turk, G. E., Mickel, W., Kapfer, S. C., et al. 2013, New J. Phys., 15, 083028
• Schuetrumpf et al. (2013) Schuetrumpf, B., Klatt, M. A., Iida, K., et al. 2013, Phys. Rev. C, 87, 055805
• Schuetrumpf et al. (2015) Schuetrumpf, B., Klatt, M. A., Iida, K., et al. 2015, Phys. Rev. C, 91, 025801
• Stonebraker et al. (1993) Stonebraker, M., Frew, J., Gardels, K., & Meredith, J. 1993, SIGMOD Rec., 22, 2
• Stoyan et al. (1987) Stoyan, D., Kendall, W., & Mecke, J. 1987, Stochastic geometry and its applications (John Wiley and Sons)
• Theodoridis & Koutroumbas (2009) Theodoridis, S. & Koutroumbas, K. 2009, Pattern Recognition, 4th edn. (Boston: Academic Press)
• Wiegand et al. (2014) Wiegand, A., Buchert, T., & Ostermann, M. 2014, Mon. Not. R. Astron. Soc., 443, 241
• Yilmaz et al. (2006) Yilmaz, A., Javed, O., & Shah, M. 2006, ACM Comput. Surv., 38
https://hughjidiette.wordpress.com/2016/08/16/william-lane-craig-on-does-the-vastness-of-the-universe-support-naturalism/
## William Lane Craig on Does the Vastness of the Universe Support Naturalism?

If a small universe is evidence for theism, is a vast universe evidence for atheism? I want to consider Craig's reply to this, but before I do that I should introduce some basic concepts about the symmetry of evidence; that is, the possibility of evidence for a hypothesis entails the possibility of evidence against it. On likelihoodism, observation O is evidence for hypothesis H over ¬H iff P(O|H) > P(O|¬H). Since P(O|H) + P(¬O|H) = 1 and P(O|¬H) + P(¬O|¬H) = 1, we can substitute into the previous inequality to get an interesting result:

• 1 – P(¬O|H) > 1 – P(¬O|¬H)
• P(¬O|¬H) > P(¬O|H)

So, in English, O is evidence for H over ¬H iff ¬O is evidence for ¬H over H. This means that you can have evidence for a hypothesis iff you can have evidence against that hypothesis. Craig applies this principle to an example:

David Manley was making the point that on the cozy, pre-Copernican cosmology—what C. S. Lewis called "the discarded image" of the cosmos—theism seemed vastly more probable than atheism. Like a Fabergé egg, the little universe centered on the Earth, with the spheres of the planets and fixed stars revolving about it, cried out for an explanation in terms of a Cosmic Designer. But if you agree that theism is more likely than atheism on such a view, then, Manley argued, you must also agree that a vast cosmos, such as we observe, counts against God's existence.

Here we see Manley arguing that if a certain observation (a small universe) supports theism, then the ¬observation (a vast universe) supports atheism. Craig agrees to this, but he softens the blow by saying:

… the degree to which the vastness of the universe increases the probability of atheism is marginal! It scarcely changes the odds at all. So while the smallness of the universe would greatly increase the probability of theism, the vastness of the universe only negligibly increases the probability of atheism.

Let's see how he derives this conclusion. He starts with the odds form of Bayes' theorem, which says that the ratio of posteriors = ratio of priors × ratio of likelihoods.

$\frac{P(Theism|Small Universe)}{P(Atheism|Small Universe)} = \frac{P(Theism)}{P(Atheism)} \times \frac{P(Small Universe|Theism)}{P(Small Universe|Atheism)}$

Here Craig picks his numbers:

Suppose we say that P(Small Universe|Theism) = .01 and P(Small Universe | Atheism) = .0001. That reflects our conviction that given a small, pre-Copernican universe, God's existence is much more probable than atheism. This assumes that the prior or intrinsic probability of theism or atheism is exactly the same; otherwise Manley's argument collapses. So we'll just assume for the sake of argument that P(Theism) = 0.5.

Craig is pointing out that if the prior probability of atheism were something small like .01 (making theism's prior .99), then evidence from a vast universe would get drowned out anyway in our posterior probability of atheism. For the sake of argument, we'll assume the priors for atheism and theism are both 0.5. Plugging in Craig's numbers, it turns out the posterior ratios are

$\frac{P(Theism|Small Universe)}{P(Atheism|Small Universe)} = \frac{.5}{.5} \times \frac{.01}{.0001} = 100$

$\frac{P(Theism|Vast Universe)}{P(Atheism|Vast Universe)} = \frac{.5}{.5} \times \frac{.99}{.9999} = .990099$

So the end result is that given a small universe, theism is 100 times more probable than atheism; and given a vast universe, atheism is only slightly more probable. Craig is a happy man.
The reason this happens is because of the suspect initial numbers Craig plugs in. Why does Craig estimate P(Small Universe|Theism) = .01 and P(Small Universe|Atheism) = .0001? Those numbers look really low given the initial intuition about pre-Copernican cosmology, the Fabergé egg. That pre-Copernican intuition should be reflected by P(Small Universe|Theism) > P(Vast Universe|Theism), which means there's a greater than 50% chance of a small universe given theism. Craig hints at why he chose such low numbers:

Now I've read enough of the philosophical and scientific literature on fine-tuning to know that the vastness of the cosmos is not really surprising on theism. For example, John Barrow and Frank Tipler in their important book The Anthropic Cosmological Principle (Oxford University Press, 1985) emphasize that the size and age of the universe are just what we should expect to observe. For the carbon that makes up our bodies was synthesized in the interior of stars and then distributed throughout the universe via supernovae. It takes aeons for galaxies of stars to form and even more time for the carbon requisite for life to be spread abroad to become the foundation of biological life. No other element could substitute for carbon in this role. So the universe must be as old as it is for life to exist and, hence, as big as it is, since the universe is in a state of cosmic expansion since its inception in the Big Bang 13.7 billion years ago. So the size (my italics) and age of the universe are just what one ought to expect given the fine-tuning of the initial conditions of the universe (my italics), which, many have argued, is best explained through design.

It seems he's using the fine-tuned constants, the initial conditions, and the laws (though unstated) as background knowledge. So, given that, that's why a vast universe has such a high probability on theism, according to Craig. But if we're using that as background knowledge, we need to use it for atheism too, in which case the background knowledge does all the work and P(Vast Universe|Theism+k) = P(Vast Universe|Atheism+k) ≈ 1. The only reason the two probabilities would be different would be if God performed miracles to affect the size of the universe, overruling what would naturally happen. If the background knowledge makes the two probabilities equal, then we can't compare the hypotheses and we need to choose different background knowledge.

The intuition Manley was getting at was that, in the pre-Copernican era, "like a Fabergé egg, the little universe centered on the Earth, with the spheres of the planets and fixed stars revolving about it, cried out for an explanation in terms of a Cosmic Designer." Let's find more reasonable numbers for our imagined pre-Copernican: P(Small Universe|Theism) = .8 and P(Small Universe|Atheism) = .1. Our pre-Copernican expects a small universe given theism and a vast universe given atheism.

$\frac{P(Theism|Small Universe)}{P(Atheism|Small Universe)} = \frac{.5}{.5} \times \frac{.8}{.1} = 8$

$\frac{P(Theism|Vast Universe)}{P(Atheism|Vast Universe)} = \frac{.5}{.5} \times \frac{.2}{.9} = 2/9$

A small universe would be significant evidence for theism, and a vast universe would be significant evidence for atheism. When evaluating the evidence, there are two questions: 1) the qualitative question of which hypothesis the evidence points to, and 2) the quantitative question of how strong the evidence is.
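These odds calculations are easy to sanity-check with a few lines of throwaway code (the probabilities are just the ones discussed above, nothing new):

```python
def posterior_odds(prior_theism, prior_atheism, lik_theism, lik_atheism):
    """Odds form of Bayes' theorem: posterior odds = prior odds x likelihood ratio."""
    return (prior_theism / prior_atheism) * (lik_theism / lik_atheism)

# Craig's numbers: P(Small|Theism) = .01, P(Small|Atheism) = .0001
print(posterior_odds(0.5, 0.5, 0.01, 0.0001))   # small universe -> 100
print(posterior_odds(0.5, 0.5, 0.99, 0.9999))   # vast universe  -> ~0.99

# The post's alternative numbers: P(Small|Theism) = .8, P(Small|Atheism) = .1
print(posterior_odds(0.5, 0.5, 0.8, 0.1))       # small universe -> 8
print(posterior_odds(0.5, 0.5, 0.2, 0.9))       # vast universe  -> ~0.22 (2/9)
```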
The numbers I plugged in were subjective probabilities and will differ depending on your own conception of God and your own atheistic theory of the universe. How strong the evidence is will depend on your subjective theories. I think the numbers Craig plugged in completely misconstrued the dialectic as Manley presented it, since on Craig's numbers the pre-Copernican theist expects a vast universe! No wonder a vast universe is only negligible evidence for atheism. The main point of this post is not so much about finding the exact numbers to plug in, but that the possibility of evidence for entails the possibility of evidence against, a point which Craig now seems to agree with.

For some practice, let's see how this works for other examples.

• If successful prayer experiments are evidence for God, then unsuccessful prayer experiments are evidence against God. If unsuccessful prayer experiments aren't evidence against God, then successful prayer experiments aren't evidence for God. (I'll leave out the contrapositive hereafter.)
• If divine hiddenness isn't evidence against God, then divine appearance isn't evidence for God. (Surely an odd result. There must be something wrong with the antecedent.)
• If suffering is evidence against God, then non-suffering (happiness) is evidence for God. (Notice that atheists who find the problem of evil persuasive have to admit that happiness is evidence for God.)
• If fine-tuned constants are evidence for God, then coarse-tuned (wide-ranging) constants are evidence against God.
• If finding intermediate fossils is evidence for common ancestry, then not finding intermediate fossils is evidence against common ancestry.

The above examples only speak to the qualitative (or binary) nature of evidence. The quantitative aspect will depend on your priors and your particular theory. If you're wondering about the quantitative aspect for any of these examples, plug in your own numbers to the odds form of Bayes' theorem like I did in the vast universe example. Elliott Sober explains in his paper (p. 16) that the reason not finding an intermediate fossil counts only negligibly against common ancestry, compared to the support gained from finding one, is the low probability of finding a fossil in the first place. (He goes through the math in the paper.) Evolutionists need not worry.

This entry was posted in Philosophy of Religion.

### One Response to William Lane Craig on Does the Vastness of the Universe Support Naturalism?

1. phillip halper says: Hi Hugh, interesting stuff and will be very useful in the upcoming debate. We have now fixed a date and will record two shows, one on Sep 8th with Jeff Zweerink and one on Sep 9th with Max Andrews. The first will be on the multiverse and the second will be on God and the multiverse. I don't know the broadcast dates yet. I'm currently overseas but will be back shortly, so if you have any time for a Skype chat beforehand let me know. Cheers, Phil
https://www.usgs.gov/media/images/k-lauea-lerz-fissures-and-flows-may-14-230-pm
# Kīlauea LERZ Fissures and Flows, May 14 2:30 p.m.

## Detailed Description

Map as of 2:30 p.m. HST, May 14, shows the location of fissure 17, which opened yesterday morning at approximately 4:30 a.m. HST, and the area covered by an 'A'ā flow since then. The flow front as of 2:30 p.m. is shown by the small red circle with label. The flow is closely following a path of steepest descent (blue line), immediately south of the 1955 'A'ā flow boundary. Shaded purple areas indicate lava flows erupted in 1840, 1955, 1960, and 2014-2015.

## Details

Image Dimensions: 908 x 702

Date Taken:
https://devopstales.github.io/kubernetes/openshift-ldap/
# Openshift LDAP authentication

Configure an Openshift cluster to use LDAP as the user backend for login, with Ansible-openshift.

### Parts of the Openshift series

In the last post I used the basic htpasswd authentication method for the installation. But I can use Ansible-openshift to configure an LDAP backend for authentication at install time.

### Environment

192.168.1.40 deployer
192.168.1.41 openshift01 # master node
192.168.1.42 openshift02 # infra node
192.168.1.43 openshift03 # worker node

With Ansible-openshift you cannot change the authentication method after install! If you installed the cluster with htpasswd and then change to LDAP, the playbook tries to add a second authentication method to the config. Adding a second type of identity provider is forbidden in version 3.11 of Ansible-openshift, so choose wisely.

### Configure Installer

# deployer
nano /etc/ansible/ansible.cfg

# use HTPasswd for authentication
# deployer
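For orientation, the LDAP identity provider is usually declared in the openshift-ansible inventory file rather than in ansible.cfg. The snippet below is only a rough sketch with placeholder hostnames, bind credentials, and search base; check the official 3.11 inventory examples for the exact syntax before relying on it.

```
# inventory file ([OSEv3:vars] section), illustrative values only
openshift_master_identity_providers=[{'name': 'my_ldap_provider', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'cn=admin,dc=example,dc=com', 'bindPassword': 'secret', 'insecure': 'true', 'url': 'ldap://ldap.example.com:389/ou=users,dc=example,dc=com?uid'}]
```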
http://openstudy.com/updates/5131b7a3e4b04b50148db129
## arun8408: The series a1, a2, … is a geometric progression. If a4 = 80 and a5 = 160, what is the value of a1?

1. agent0smith: $\large a _{n} = a _{1} r ^{n-1}$ is the formula for a geometric progression, r being the common ratio, a1 being the first term, and an the n-th term. r is the common ratio, so if you divide a5 by a4, you'll find r. Then you can use $\large a _{4} = 80 = a _{1} r ^{4-1}$ to find a1.
2. ParthKohli: $\dfrac{160}{80} = r$
3. AravindG: The nth term of a GP = ar^{n-1}. Use this and write the 5th and 4th terms; you can solve them.
4. ParthKohli: $r = 2$. You can now easily determine the first term by continuous division.
5. agent0smith: You can find a1 by either just dividing by r until you reach a1, or using $\large a _{4} = 80 = a _{1} r ^{4-1}$ and $\large a _{5} = 160 = a _{1} r ^{5-1}$.
6. arun8408: What value do I substitute for r?
7. agent0smith: $\large r = \frac{ a _{5} }{ a _{4} }$
8. arun8408: a1 · r^3 = a1 · 16??
9. agent0smith: r should be 2, since 160/80 = 2. $\large a _{4} = 80 = a _{1} * 2 ^{4-1}$, so $\large 80 = a _{1}*2 ^{3}$.
10. arun8408: Yeah, I got it: 10 is the answer. Thank you so much.
11. arun8408: What comes next in this sequence: 1, 2, 6, 24, 120, __?
12. agent0smith: There's a pattern: first term: 1×1 = 1; second term: 2×1 = 2; third term: 3×2 = 6; fourth term: 4×6 = 24; fifth term: 5×24 = 120; sixth term: ? × 120 = ?. Look at the pattern of multiples on the left.
https://me.gateoverflow.in/527/gate2016-1-6
GATE2016-1-6 0 votes A rigid ball of weight $100$ $N$ is suspended with the help of a string. The ball is pulled by a horizontal force $F$ such that the string makes an angle of $30^o$ with the vertical. The magnitude of force $F$ (in $N$) is __________ recategorized Answer: Related questions 0 votes 0 answers A $300$ $mm$ thick slab is being cold rolled using roll of $600$ $mm$ diameter. If the coefficient of friction is $0.08$, the maximum possible reduction (in $mm$) is __________ 0 votes 0 answers A hypothetical engineering stress-strain curve shown in the figure has three straight lines $PQ, QR, RS$ with coordinates P$(0,0)$, Q$(0.2,100)$, R$(0.6,140)$ and S$(0.8,130)$. $'Q'$ is the yield point, $'R'$ is the UTS point and $'S'$ the fracture point. The toughness of the material (in $MJ/m^3$) is __________ 0 votes 0 answers A simply-supported beam of length $3L$ is subjected to the loading shown in the figure. It is given that $P=1\: N$, $L=1\:m$ and Young's modulus $E=200\:GPa$. The cross-section is a square with dimension $10\:mm\times10\:mm$. The bending stress ... beam at a distance of $1.5L$ from the left end is _____________ (Indicate compressive stress by a negative sign and tensile stress by a positive sign.) 0 votes 0 answers The figure shows cross-section of a beam subjected to bending. The area moment of inertia (in $mm^4$) of this cross-section about its base is ________ 0 votes 0 answers A horizontal bar with a constant cross-section is subjected to loading as shown in the figure. The Young’s moduli for the sections $AB$ and $BC$ are $3E$ and $E$, respectively. For the deflection at $C$ to be zero, the ratio $\displaystyle{\frac{P}{F}}$ is ____________
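For the first question above (the ball suspended by a string), the force balance can be worked out directly; this derivation is ours and not part of the original page:

$T\cos 30^\circ = W = 100~\mathrm{N}, \qquad F = T\sin 30^\circ \;\Rightarrow\; F = W\tan 30^\circ = \frac{100}{\sqrt{3}} \approx 57.7~\mathrm{N}.$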
https://www.jstage.jst.go.jp/article/jhps/51/1/51_19/_article/-char/ja/
Online ISSN: 1884-7560. Print ISSN: 0367-6110. ISSN-L: 0367-6110. Journal article (free access), 2016, Vol. 51, No. 1, p. 19-26.

The accident at the Fukushima nuclear power plant in 2011 caused the release of large amounts of tellurium (Te) isotopes, along with radio-cesium (Cs) and radio-iodine (I), into the environment. The total amounts of 127mTe and 129mTe released from the nuclear power plant were estimated as 1.1 × 10^15 and 3.3 × 10^15 Bq, respectively. At the location where the deposition of 129mTe was relatively large, the ratio of the radioactivity of 129mTe to that of 137Cs reportedly reached 1.49 on June 14, 2011. Since 127mTe has a relatively long half-life, it possibly contributed to the internal radiation dose at the early stage after the accident. In this paper, the ratio of the committed effective dose of 127mTe to that of 137Cs after the oral ingestion of rice was estimated by using various reported parameters. The relevant parameters are: 1) the deposition ratios of 127mTe, 129mTe, and 134Cs to 137Cs; 2) the deposition ratio of 127mTe to 129mTe; 3) the transfer factors of Te and Cs; and 4) the effective dose coefficients for 127mTe, 129mTe, 134Cs, and 137Cs. The ratios of the committed effective dose of 127mTe to that of 137Cs were calculated for adults after a single ingestion at the time of the rice harvest. The ratio was 0.45 where the 129mTe/137Cs in the soil was higher and 0.05 where the level of 129mTe/137Cs was average. The ratio of the committed effective dose from 129mTe and 127mTe to that from 137Cs for one year reached 0.55 and 9.03 at the location where the level of 129mTe/137Cs in the soil was higher. These data could indicate that radioactive Te should not be disregarded in reconstructing the internal radiation dose from food for one year after the accident.
https://merenlab.org/software/anvio/help/main/programs/anvi-estimate-trna-taxonomy/
# anvi-estimate-trna-taxonomy

[program] Estimates taxonomy at genome and metagenome level using tRNA sequences. See the program help menu or go back to the main page of anvi'o programs and artifacts.

## Usage

This program uses the taxonomic associations of your tRNA sequences to estimate the taxonomy for genomes, metagenomes, or collections stored in your contigs-db. This is the final step in the trna-taxonomy workflow. Before running this program, you'll need to have run anvi-run-trna-taxonomy on the contigs-db that you're inputting to this program.

## Input options

### 1: Running on a single genome

By default, this program will assume that your contigs-db contains only a single genome and will determine the taxonomy of that single genome.

anvi-estimate-trna-taxonomy -c contigs-db

This will give you only the best taxonomy hit for your genome based on your tRNA data. If you want to look under the hood and see what results from anvi-run-trna-taxonomy it's using to get there, add the --debug flag.

anvi-estimate-trna-taxonomy -c contigs-db \
                            --debug

### 2: Running on a metagenome

In metagenome mode, this program will assume that your contigs-db contains multiple genomes and will try to give you an overview of the taxa within it. To do this, anvi'o will determine which anticodon has the most hits in your contigs (for example GGG), and then will look at the taxonomy hits for tRNA with that anticodon across your contigs.

anvi-estimate-trna-taxonomy -c contigs-db \
                            --metagenome-mode

If instead you want to look at a specific anticodon, you can specify that with the -S parameter. For example, to look at GGT, just run the following:

anvi-estimate-trna-taxonomy -c contigs-db \
                            --metagenome-mode \
                            -S GGT

### 3: Running on multiple metagenomes

You can use this program to look at multiple metagenomes by providing a metagenomes artifact. This is useful to get an overview of what kinds of taxa might be in your metagenomes, and what kinds of taxa they share. Running this

anvi-estimate-trna-taxonomy --metagenomes metagenomes \
                            --output-file-prefix EXAMPLE

will give you an output file containing all taxonomic levels found and their coverages in each of your metagenomes, based on their tRNA.

### 4: Estimating the taxonomy of bins

You can use this program to estimate the taxonomy of all of the bins in a collection by providing the collection and the associated profile-db.

anvi-estimate-trna-taxonomy -c contigs-db \
                            -C collection \
                            -p profile-db

When doing this, you can also put the final results into your profile-db as a misc-data-layers with the flag --update-profile-db-with-taxonomy.

### 5: I don't even have a contigs-db. Just a fasta file.

This program can run the entire ad hoc sequence search without a contigs-db involved (just a fasta and the number of target sequences as a percent of the total; default: 20 percent), but this is not recommended. In this mode, any other parameters you provide will be ignored.

anvi-estimate-trna-taxonomy --dna-sequence fasta \
                            --max-num-target-sequences 10

## The Output

Now that you've chosen your inputs, consider whether you want an output file and what it should look like. By default, this program won't write an output file (it only stores genome-taxonomy information in your contigs-db). However, if you add any of these output options, it will instead produce a genome-taxonomy-txt.
### Anticodon Frequencies

If you want to look at the anticodon frequencies before getting taxonomy info at all (for example because you can't decide which anticodon to use for input option 2), add the flag --report-anticodon-frequencies. This will report the anticodon frequencies to a tab-delimited file and quit the program.

### A single output

To get a single output (a fancy table for your viewing pleasure), just add the output file path. In this example, the input will be a single contigs-db (input option 1):

anvi-estimate-trna-taxonomy -c contigs-db \
                            -o path/to/output.txt

This will give you a tab-delimited matrix with all levels of taxonomic information for the genome stored in your contigs-db. Specifically, the output is a genome-taxonomy-txt.

If you want to focus on a single taxonomic level, use the parameter --taxonomic-level, like so:

anvi-estimate-trna-taxonomy -c contigs-db \
                            -o path/to/output.txt \
                            --taxonomic-level genus

You can also simplify the taxonomy names in the table with the flag --simplify-taxonomy-information.

If you're running on a profile-db, you can also choose to add the anticodon coverage to the output with --compute-anticodon-coverages.

### Multiple outputs

If you have multiple outputs (i.e. you are looking at multiple metagenomes (input option 3) or you are looking at each anticodon individually with --per-anticodon-output-file), you should instead provide an output filename prefix.

anvi-estimate-trna-taxonomy --metagenomes metagenomes \
                            --output-file-prefix EXAMPLE

The rest of the options listed for the single output (i.e. focusing on a taxonomic level, simplifying taxonomy information, etc.) still apply.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1831900179386139, "perplexity": 3244.6194745541984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704792131.69/warc/CC-MAIN-20210125220722-20210126010722-00361.warc.gz"}
https://www.physicsforums.com/threads/circuits-problems.301321/
Circuits problems

1. Mar 20, 2009 clairez93

1. The problem statement, all variables and given/known data

See image attached. Please ignore all work you see around the problem. :] I need help with 9, 10, and 11 in this picture.

2. Relevant equations

V = IR

3. The attempt at a solution

9.
$$R_{eq} = R + (\frac{1}{2R} + \frac{1}{2R})^{-1} = 2R = 24$$
$$V' = (0.5)(\frac{1}{2R} + \frac{1}{2R})^{-1} = 6 V$$ (this is the voltage drop across the parallel set of resistors of 2R and 2R)
$$I_{2} = I_{3} = \frac{6}{2R} = 0.25$$ (this is the current across each of the parallel branches, and they are only equal because they have the same resistance and voltage)
$$I_{1} = I_{2} + I_{3} = .25 + .25 = .5$$
$$V_{i} = I_{1}R = (.5)(12) = 6$$
$$E = V' + V_{i} = 6+6 = 12$$
Correct answer is actually 24 V.

10.
$$R_{eq} = (\frac{1}{250} + \frac{1}{300})^{-1} = 136.364$$
$$V = IR_{eq}$$
$$24 = I(136.364)$$
$$I = 0.176$$
$$V' = (0.176)(136.364) = 24$$ (this part is weird, is it because the whole circuit's in parallel anyway so there's really no voltage drop across anything?)
$$I = \frac{V}{R} = \frac{24}{300} = 0.08 A$$
Correct answer is actually 40 mA.

11. Taking both the loops clockwise:
$$0 = -10I_{1} -20I_{2} - 5I_{1} + 50$$
$$0 = 20I_{2} - 10I_{3} - 40$$
Stuck from here. Any help or pointers would be appreciated.

Attached Files: • P1060466.jpg File size: 17 KB

2. Mar 21, 2009 tiny-tim

Hi clairez93! Yes, I2 = I3 = I1/2, but where does 6/2R come from? Use I = I3 = I1/2 and apply Kirchhoff's rules to the outer circuit.

3. Mar 21, 2009 clairez93

I used I = V/R, and the voltage across it I thought was 6, and then the resistance is 2R, so I = 6/2R. Is that not correct?

4. Mar 21, 2009 tiny-tim

why?

5. Mar 21, 2009 clairez93

From this.

6. Mar 21, 2009 tiny-tim

ah! but 0.5 is only the current through one of the 2Rs

7. Mar 21, 2009 clairez93

Aha! I see! So then the voltage drop across is really 12 V. And therefore I2 = 12 / 2R = 0.5 = I3. And therefore I1 = I2 + I3 = 1. And therefore the voltage across R is IR = 1*12 = 12. And therefore E = 24! Thanks! Now I am still having troubles with 10 & 11.

8. Mar 21, 2009 tiny-tim

10: I make the same as you … but much more quickly, just by looking at the right-hand loop
11: use I1 = I2 + I3

9. Mar 21, 2009 clairez93

Are my loop equations correct? I just want to make sure before I solve.

10. Mar 22, 2009 clairez93

Okay, so my loop equations are wrong, I believe, because I am getting odd numbers.

11. Mar 22, 2009 tiny-tim

Show us?
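For reference, collecting the corrected reasoning for problem 9 from the exchange above into one place (using the thread's values, R = 12 Ω and 0.5 A measured in one of the 2R branches):

$$V_{\parallel} = (0.5)(2R) = (0.5)(24) = 12\ \mathrm{V}, \qquad I_{2} = I_{3} = 0.5\ \mathrm{A}, \qquad I_{1} = I_{2} + I_{3} = 1\ \mathrm{A}$$
$$E = I_{1}R + V_{\parallel} = (1)(12) + 12 = 24\ \mathrm{V}$$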
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7524523735046387, "perplexity": 2195.1824452476185}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824543.20/warc/CC-MAIN-20171021024136-20171021044136-00866.warc.gz"}
https://www.physicsforums.com/threads/how-to-find-1-honest-man-out-of-99-with-100-dollars.513300/
How to find 1 honest man out of 99 with $100 dollars?

1. Jul 11, 2011 hahaputao

While hiking through the mountains, you get lost and end up in a village, population 99. You only have $200 left. Everyone is greedy, so they won't do anything for free. The majority of the townsfolk are also honest, but some are troublemakers. You need to hire one of the honest citizens as your guide home, which will cost $100. But first you need to find one that is honest. For $1, you can ask any villager if another villager is honest. If you ask an honest villager, they tell the truth. If you ask a troublemaker, they might say anything. (The villagers don't understand other kinds of questions.) Come up with a way to find an honest villager for sure, and with enough money left to get home.

Important:
1. the villagers do NOT understand any other questions
2. the troublemakers may lie OR tell the truth

2. Jul 23, 2011 256bits

1. the villagers do NOT understand any other questions

Then how would an honest villager know what you mean when you ask for a guide, if they do not understand any other question except "Is he honest?"
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4047713577747345, "perplexity": 1749.0262805648886}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865468.19/warc/CC-MAIN-20180523082914-20180523102914-00165.warc.gz"}
http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Fuchs_Martin&arg9=Martin_Fuchs
Topics in the Calculus of Variations
Martin Fuchs, Universität des Saarlandes, Saarbrücken, Germany
A publication of Vieweg+Teubner. Vieweg Advanced Lectures in Mathematics
1994; 145 pp; softcover
ISBN-10: 3-528-06623-7
ISBN-13: 978-3-528-06623-9
List Price: US$28
Member Price: US$25.20
Order Code: VWALM/2

This book illustrates two basic principles in the calculus of variations: the question of existence of solutions and the closely related problem of regularity of minimizers. Chapter One studies variational problems for nonquadratic energy functionals defined on suitable classes of vector-valued functions where nonlinear constraints are incorporated. Problems of this type arise for mappings between Riemannian manifolds or in nonlinear elasticity. Using direct methods, the existence of generalized minimizers is rather easy to establish, and it is then shown that regularity holds up to a set of small measure. Chapter Two contains a short introduction to Geometric Measure Theory, which serves as a basis for developing an existence theory for (generalized) manifolds with prescribed mean curvature and boundary in arbitrary dimensions and codimensions. One major aspect of the book is to concentrate on techniques and to present methods which turn out to be useful for applications in regularity theorems as well as for existence problems.

A publication of Vieweg+Teubner. The AMS is exclusive distributor in North America. Vieweg+Teubner Publications are available worldwide from the AMS outside of Germany, Switzerland, Austria, and Japan.

Table of Contents:
Degenerate variational integrals with nonlinear side conditions, p-harmonic maps and related topics
Manifolds of prescribed mean curvature in the setting of geometric measure theory
Bibliography
Index
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20123262703418732, "perplexity": 1966.8245427427412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186353.38/warc/CC-MAIN-20170322212946-00379-ip-10-233-31-227.ec2.internal.warc.gz"}
https://onepetro.org/SPELRBC/proceedings-abstract/18LRBC/1-18LRBC/D011S001R002/214592
## Abstract In this continuation work, we expand upon the Nagoo et al., SPE-190921 [1] seminal paper that unveiled for the first time a simple and direct analytical critical gas velocity diameter-and-inclination-dependent equation for predicting the onset of liquids loading in horizontal wellbores. Using this equation, we now introduce a new analytical method for quantifying lost liquids production in liquids-rich horizontal gassy oil and gas wells undergoing liquids loading. Case studies of in-operation horizontal wells from the Permian and Delaware basins are used to highlight and validate the methodology.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.959592878818512, "perplexity": 4444.330433244559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464146.56/warc/CC-MAIN-20210418013444-20210418043444-00494.warc.gz"}
http://mathematica.stackexchange.com/questions/23920/rising-recursion-relationships/23925
# Rising Recursion Relationships Lets say I want to compute the following function in mathematica: $G[n,k]=G[n+1,k-1] + G[n+2,k-2]$ where I know that $G[n,0]=n$ and $G[n,1]=n^2$. So, for example, $G[3,2]=G[4,1]+G[5,0]=4^2+5$ or, less trivially, $G[5,4]=G[6,3]+G[7,2]=(G[7,2]+G[8,1])+(G[8,1]+G[9,0])=((G[8,1]+G[9,0])+G[8,1])+(G[8,1]+G[9,0])= 3G[8,1]+2G[9,0]$ So I attempt it with the following code RecurrenceTable[{s1[n, k] == s1[n + 1, k - 1] - s1[n + 2, k - 2], s1[r, 1] == r, s1[r, 0] == r^2}, s1, {n, 2, 6}, {k, 2, 4}] // Grid However it spits out lots of error messages relating to functions being called with two variables when only one is expected. - It is defined via that recursion relationship with those initial conditions? It is the equivalent of $G[n,k]$ in the example. –  Benjamin Horowitz Apr 24 '13 at 4:11 Your sample code doesn't spit out any error message here. It just doesn't evaluate to anything. Have you tried restarting the kernel? –  halirutan Apr 24 '13 at 4:23 I did, I should mention I am using Mathematica 7, perhaps later editions are smarter... –  Benjamin Horowitz Apr 24 '13 at 4:28 Unfortunately, I have only 8.0.4 as oldest version here and I cannot test it in 7. In version 8 are no error messages too. Is it seems, RSolve and RecurrenceTable cannot help you with your problem anyway. –  halirutan Apr 24 '13 at 4:34 You can always define the recursive function yourself and use memoizing to speed up computation: g[n_, 0] := g[n, 0] = n; g[n_, 1] := g[n, 1] = n^2; g[n_, k_] := g[n, k] = g[n + 1, k - 1] + g[n + 2, k - 2]; Table[g[n, k], {k, 0, 10}, {n, 0, 10}] // TableForm - Note, this can be solved in general form. Start as RSolve[{G[n, k] == G[n + 1, k - 1] + G[n + 2, k - 2]}, G[n, k], {n, k}] You have two unknown functions C(1)[x] and C(2)[x] that you can find using your boundary conditions. A[n_] = C[1][n] /. Solve[n == (-(1/2) - Sqrt[5]/2)^n C[1][n] + (-(1/2) + Sqrt[5]/2)^ n C[2][n], C[1][n]][[1]] B[n_] = C[1][1 + n] /. Solve[n^2 == (-(1/2) - Sqrt[5]/2)^ n C[1][n + 1] + (-(1/2) + Sqrt[5]/2)^n C[2][n + 1], C[1][n + 1]][[1]] Combine the two above to find function C(2)[n] - I rename it S2[n]: S2[n_] = C[2][1 + n] /. Solve[A[n + 1] == B[n], C[2][1 + n]][[1]] /. n -> n - 1 // FullSimplify Substitute this in A[n] to find C(1)[n] - I rename it S1[n] S1[n_] = (-(1/2) - Sqrt[5]/2)^-n (n - (-(1/2) + Sqrt[5]/2)^n S2[n]) //FullSimplify Finally substitute both in the very original solution to find the final function: SolvedG[n_, k_] = (-(1/2) - Sqrt[5]/2)^n S1[k + n] + (-(1/2) + Sqrt[5]/2)^ n S2[k + n] // FullSimplify; So here - you got it - the beauty of math: No, this is just a simplified version, my actual problem is non-linear in terms of $n$ and $k$. –  Benjamin Horowitz Apr 24 '13 at 6:17
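The memoization idea behind the accepted approach is language-agnostic. For comparison only (my own sketch, not from the thread), here is a rough Python equivalent of the same recurrence and boundary conditions using functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def g(n, k):
    """G(n, k) = G(n+1, k-1) + G(n+2, k-2), with G(n, 0) = n and G(n, 1) = n**2."""
    if k == 0:
        return n
    if k == 1:
        return n ** 2
    return g(n + 1, k - 1) + g(n + 2, k - 2)

# Check against the worked examples in the question:
print(g(3, 2))   # G[3,2] = 4^2 + 5 = 21
print(g(5, 4))   # G[5,4] = 3*G[8,1] + 2*G[9,0] = 3*64 + 2*9 = 210
```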
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4473985731601715, "perplexity": 3752.9410370297023}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997886087.7/warc/CC-MAIN-20140722025806-00008-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/integration-problem.188539/
# Integration problem

1. Oct 2, 2007

### architect

Hi all,

I would like to get some advice regarding a difficult integration problem.

$$\int_{\varphi_{o}-\Delta\varphi}^{\varphi_{o}+\Delta\varphi}\int_{\vartheta_{o}-\Delta\vartheta}^{\vartheta_{o}+\Delta\vartheta}e^{\kappa[\sin\vartheta_{o}\sin\vartheta\cos(\varphi-\varphi_{o})+\cos\vartheta_{o}\cos\vartheta]}\sin\vartheta\,\partial\vartheta\,\partial\varphi$$ Equation 1

Let me briefly explain what I have attempted so far. I know that I can re-write Equation 1 as

$$\int_{\varphi_{o}-\Delta\varphi}^{\varphi_{o}+\Delta\varphi}\int_{\vartheta_{o}-\Delta\vartheta}^{\vartheta_{o}+\Delta\vartheta}e^{\kappa\cos\gamma}\sin\vartheta\,\partial\vartheta\,\partial\varphi$$ Equation 2

This is because $$\gamma$$ can be thought of as the angular displacement from $$(\vartheta,\varphi)$$ to $$(\vartheta_{o},\varphi_{o})$$. We can therefore write

$$\cos\gamma=\sin\vartheta_{o}\sin\vartheta\cos(\varphi-\varphi_{o})+\cos\vartheta_{o}\cos\vartheta$$

Since I could not see a straightforward solution, I thought of re-writing the above integral in terms of an infinite series representation and then integrating the series (which would be simpler). I noticed that in Abramowitz and Stegun's book (Handbook of Mathematical Functions) there is an equation which might help me achieve my goal. This is equation 10.2.36.

$$e^{\kappa\cos\gamma}=\sum^{\infty}_{n=0}(2n+1)\left[\sqrt{\frac{\pi}{2\kappa}}I_{n+1/2}(\kappa)\right]P_{n}(\cos\gamma)$$ Equation 3, or 10.2.36 in Abramowitz & Stegun

where we have the modified Bessel function and a Legendre polynomial. Therefore my new integral will be:

$$\int_{\varphi_{o}-\Delta\varphi}^{\varphi_{o}+\Delta\varphi}\int_{\vartheta_{o}-\Delta\vartheta}^{\vartheta_{o}+\Delta\vartheta}\sum^{\infty}_{n=0}(2n+1)\left[\sqrt{\frac{\pi}{2\kappa}}I_{n+1/2}(\kappa)\right]P_{n}(\cos\gamma)\sin\vartheta\,\partial\vartheta\,\partial\varphi$$ Equation 4

Now I think I must do something about the modified Bessel function and the Legendre polynomial. Do you think I can substitute Equation 10.2.5 from Abramowitz & Stegun (http://www.math.sfu.ca/~cbm/aands/page_443.htm) for the modified Bessel function? How would I re-write the Legendre polynomial in this case? Can I use the Addition Theorem of Spherical Harmonics? Or will I complicate things even further? If I use the Addition Theorem then I would get:

$$P_{n}(\cos\gamma)=\sum^{n}_{m=-n}\Upsilon^{*}_{nm}(\vartheta_{o},\varphi_{o})\Upsilon_{nm}(\vartheta,\varphi)$$

Will this be correct? Please advise if you think that the procedure I have followed so far is incorrect!

Thanks & Regards
Alex

Last edited: Oct 2, 2007

2. Oct 3, 2007

### AiRAVATA

I think that would be correct. The problem can be justifying the exchange of the double integral with the sum. Is the first integral in its original form or have you already transformed it? Maybe there is a simpler way to express it?

---EDIT---

I've found these two identities in N.N. Lebedev's book:

$$P_n(\cos \theta)=\frac{1}{\pi}\int_0^\pi (\cos \theta +i\sin \theta \cos \varphi)^n d\varphi,\qquad 0<\theta<\pi.$$

$$e^{ika \cos \varphi}=J_0(ka)+2\sum_{n=1}^\infty (-1)^nJ_n(ka) \cos n\varphi,$$

where $J_n$ are Bessel functions of order $n$. Maybe this can help.

Last edited: Oct 3, 2007

3. Oct 3, 2007

### architect

Hi,

Many thanks for your reply. Yes, the first double integral is in its original form (Equation 1). Will the expression that you wrote for the Legendre polynomial from Lebedev's book not make it more complicated (because of the integral)?
Thanks again,
Regards
Alex

4. Oct 4, 2007

### AiRAVATA

Could be, the only way to find out is to get your hands dirty! I posted both formulas in order to give you another choice for your calculations. The thing with special functions is that some representations make the work easier, while others make it impossible. Let me know how it went.
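For what it's worth, the original double integral is easy to evaluate numerically for specific parameter values, which gives a useful sanity check on any series expansion. A minimal sketch with SciPy (my own, not from the thread; the parameter values are arbitrary placeholders):

```python
import numpy as np
from scipy.integrate import dblquad

# Arbitrary example parameters (placeholders, not values from the thread)
kappa, th0, ph0 = 2.0, 0.8, 0.3
dth, dph = 0.2, 0.25

def integrand(th, ph):
    # exp(kappa * cos(gamma)) * sin(theta), with
    # cos(gamma) = sin(th0) sin(th) cos(ph - ph0) + cos(th0) cos(th)
    cos_gamma = np.sin(th0) * np.sin(th) * np.cos(ph - ph0) + np.cos(th0) * np.cos(th)
    return np.exp(kappa * cos_gamma) * np.sin(th)

# dblquad expects f(y, x): theta is the inner variable, phi the outer one
value, error = dblquad(integrand,
                       ph0 - dph, ph0 + dph,      # phi limits
                       lambda ph: th0 - dth,      # theta lower limit
                       lambda ph: th0 + dth)      # theta upper limit
print(value, error)
```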
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9593392610549927, "perplexity": 697.6276221257565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828189.71/warc/CC-MAIN-20171024071819-20171024091819-00389.warc.gz"}
https://en.wikipedia.org/wiki/Back-face_culling
# Back-face culling On the left a model without BFC; on the right the same model with BFC: back-faces are removed. In computer graphics, back-face culling determines whether a polygon of a graphical object is drawn. It is a step in the graphical pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, but the polygon projected on the screen has a counter-clockwise winding then it has been rotated to face away from the camera and will not be drawn. The process makes rendering objects quicker and more efficient by reducing the number of polygons for the program to draw. For example, in a city street scene, there is generally no need to draw the polygons on the sides of the buildings facing away from the camera; they are completely occluded by the sides facing the camera. In general, back-face culling can be assumed to produce no visible artifact in a rendered scene if it contains only closed and opaque geometry. In scenes containing transparent polygons, rear-facing polygons may become visible through the process of alpha composition. In wire-frame rendering, back-face culling can be used to partially address the problem of hidden line removal, but only for closed convex geometry. A related technique is clipping, which determines whether polygons are within the camera's field of view at all. Another similar technique is Z-culling, also known as occlusion culling, which attempts to skip the drawing of polygons that are covered from the viewpoint by other visible polygons. In non-realistic renders certain faces can be culled by whether or not they are visible, rather than facing away from the camera. "inverted hull" or "front face culling" can be used to simulate outlines or toon shaders without post-processing effects.[1] ## Implementation One method of implementing back-face culling is by discarding all triangles where the dot product of their surface normal and the camera-to-triangle vector is greater than or equal to zero ${\displaystyle \left(V_{0}-P\right)\cdot N\geq 0}$ where P is the view point, V0 is the first vertex of a triangle and N is its normal, defined as a cross product of two vectors representing sides of the triangle adjacent to V0 ${\displaystyle N=\left(V_{1}-V_{0}\right)\times \left(V_{2}-V_{0}\right)}$ Since cross product is non-commutative, defining the normal in terms of cross product allows to specify normal direction relative to triangle surface using vertex order(winding): ${\displaystyle \left(V_{1}-V_{0}\right)\times \left(V_{2}-V_{0}\right)=-\left(V_{2}-V_{0}\right)\times \left(V_{1}-V_{0}\right)}$ If points are already in view space, P can be assumed to be (0, 0, 0), the origin. ${\displaystyle -V_{0}\cdot N\geq 0}$ It is also possible to use this method in projection space by representing the above inequality as a determinant of a matrix and applying the projection matrix to it.[2] Another method exists based on reflection parity, which is more appropriate for two dimensions where the surface normal cannot be computed (also known as CCW check). 
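Before the two-dimensional parity check is developed below, it may help to see the dot-product test above in code. A minimal NumPy sketch (my own illustration, not from the article): the normal is taken from the vertex winding and the triangle is culled when (V0 − P) · N ≥ 0.

```python
import numpy as np

def is_back_facing(v0, v1, v2, view_point):
    """Return True if triangle (v0, v1, v2) faces away from view_point.

    The normal comes from the winding order, N = (v1 - v0) x (v2 - v0),
    and the face is culled when (v0 - view_point) . N >= 0.
    """
    v0, v1, v2, p = (np.asarray(a, dtype=float) for a in (v0, v1, v2, view_point))
    normal = np.cross(v1 - v0, v2 - v0)
    return float(np.dot(v0 - p, normal)) >= 0.0

# Counter-clockwise triangle in the z = 0 plane, camera on the +z side: front-facing
print(is_back_facing([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 5]))   # False
# The same triangle viewed from the -z side is back-facing and would be culled
print(is_back_facing([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, -5]))  # True
```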
Let a unit triangle in two dimensions (homogeneous coordinates) be defined as ${\displaystyle U_{0}={\begin{bmatrix}0\\0\\1\end{bmatrix}},U_{1}={\begin{bmatrix}1\\0\\1\end{bmatrix}},U_{2}={\begin{bmatrix}0\\1\\1\end{bmatrix}}}$ Then for some other triangle, also in two dimensions, ${\displaystyle V_{0}={\begin{bmatrix}x_{0}\\y_{0}\\1\end{bmatrix}},V_{1}={\begin{bmatrix}x_{1}\\y_{1}\\1\end{bmatrix}},V_{2}={\begin{bmatrix}x_{2}\\y_{2}\\1\end{bmatrix}}}$ define a matrix that transforms the unit triangle: ${\displaystyle M={\begin{bmatrix}x_{1}-x_{0}&x_{2}-x_{0}&x_{0}\\y_{1}-y_{0}&y_{2}-y_{0}&y_{0}\\0&0&1\end{bmatrix}}}$ so that: ${\displaystyle MU_{0}=V_{0}}$ ${\displaystyle MU_{1}=V_{1}}$ ${\displaystyle MU_{2}=V_{2}}$ Discard the triangle if matrix M contained an odd number of reflections (facing the opposite way of the unit triangle) ${\displaystyle \left|M\right|<0}$ The unit triangle is used as a reference and transformation M is used as a trace to tell if vertex order is different between two triangles. The only way vertex order can change in two dimensions is by reflection. Reflection is an example of involutory function (with respect to vertex order), therefore an even number of reflections will leave the triangle facing the same side, as if no reflections were applied at all. An odd number of reflections will leave the triangle facing the other side, as if exactly after one reflection. Transformations containing an odd number of reflections always have a negative scaling factor, likewise, the scaling factor is positive if there are no reflections or even a number of them. The scaling factor of a transformation is computed by determinant of its matrix. ## References 1. ^ 2. ^ David H. Eberly (2006). 3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics, p. 69. Morgan Kaufmann Publishers, United States. ISBN 0122290631.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5569099187850952, "perplexity": 859.6730657129276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00140.warc.gz"}
https://support.numxl.com/hc/en-us/articles/214239466-TEST-MEAN-Population-Mean-Test
# TEST_MEAN - Population Mean Test

Calculates the p-value of the statistical test for the population mean.

## Syntax

TEST_MEAN(x, mean, Return_type, Alpha)

x is the input data sample (one/two dimensional array of cells (e.g. rows or columns)).

mean is the assumed population mean. If missing, the default value of zero is assumed.

Return_type is a switch to select the return output (1 = P-Value (default), 2 = Test Statistics, 3 = Critical Value).

| Return_type | Description |
|---|---|
| 1 | P-Value |
| 2 | Test Statistics (e.g. Z-score) |
| 3 | Critical Value |

Alpha is the statistical significance of the test (i.e. alpha). If missing or omitted, an alpha value of 5% is assumed.

## Remarks

1. The sample data may include missing values (e.g. #N/A).
2. The test hypothesis for the population mean: $$H_{o}: \mu=\mu_o$$ $$H_{1}: \mu\neq \mu_o$$ Where:
• $H_{o}$ is the null hypothesis.
• $H_{1}$ is the alternate hypothesis.
• $\mu_o$ is the assumed population mean.
• $\mu$ is the actual population mean.
3. For the case in which the underlying population distribution is normal, the sample mean/average has a Student's t sampling distribution with T-1 degrees of freedom: $$\bar x \sim t_{\nu=T-1}(\mu,\frac{S^2}{T})$$ Where:
• $\bar x$ is the sample average.
• $\mu$ is the population mean/average.
• $S$ is the sample standard deviation. $$S^2 = \frac{\sum_{i=1}^T(x_i-\bar x)^2}{T-1}$$
• $T$ is the number of non-missing values in the data sample.
• $t_{\nu}()$ is the Student's t-distribution.
• $\nu$ is the degrees of freedom of the Student's t-distribution.
4. The Student's t-test for the population mean can be used for small and for large data samples.
5. This is a two-sided (i.e. two-tailed) test, so the computed p-value should be compared with half of the significance level ($\alpha/2$).
6. The underlying population distribution is assumed normal (Gaussian).

## Examples

Example 1:

| | A | B |
|---|---|---|
| 1 | Date | Data |
| 2 | 1/1/2008 | #N/A |
| 3 | 1/2/2008 | -0.95 |
| 4 | 1/3/2008 | -0.88 |
| 5 | 1/4/2008 | 1.21 |
| 6 | 1/5/2008 | -1.67 |
| 7 | 1/6/2008 | 0.83 |
| 8 | 1/7/2008 | -0.27 |
| 9 | 1/8/2008 | 1.36 |
| 10 | 1/9/2008 | -0.34 |
| 11 | 1/10/2008 | 0.48 |

| Formula | Description (Result) |
|---|---|
| =AVERAGE($B$2:$B$11) | Sample mean (-0.0256) |
| =TEST_MEAN($B$2:$B$11,0) | p-value of the test (0.472) |

## References

• George Casella; Statistical Inference; Thomson Press (India) Ltd; (Dec 01, 2008), ISBN: 8131503941
• K.L. Lange, R.J.A. Little and J.M.G. Taylor. "Robust Statistical Modeling Using the t Distribution." Journal of the American Statistical Association 84, 881-896, 1989
• Hurst, Simon. The Characteristic Function of the Student-t Distribution. Financial Mathematics Research Report No. FMRR006-95, Statistics Research Report No. SRR044-95
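NumXL's internal implementation isn't shown here, but the default return (the p-value) can be cross-checked independently. A small SciPy sketch on the example data above (my own check, not NumXL code); Remark 5 suggests the reported p-value is one-tailed (it is compared against alpha/2), so SciPy's two-sided p-value is halved for comparison:

```python
import numpy as np
from scipy import stats

# Data column from the example above; the #N/A cell is simply dropped
x = np.array([-0.95, -0.88, 1.21, -1.67, 0.83, -0.27, 1.36, -0.34, 0.48])

t_stat, p_two_sided = stats.ttest_1samp(x, popmean=0.0)

print(round(x.mean(), 4))          # -0.0256, matching the AVERAGE cell
print(round(p_two_sided / 2, 3))   # ~0.472, matching the TEST_MEAN cell
```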
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938500881195068, "perplexity": 3707.942273506844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575674.3/warc/CC-MAIN-20190922201055-20190922223055-00501.warc.gz"}
https://gianlubaio.blogspot.com/2016/06/silver-lining.html
## Thursday, 30 June 2016

### Silver lining

Fivethirtyeight has just published their first prediction for the next US presidential election, stating that Clinton has around 80% chance of winning to Trump's 20%. This has been also reported in the general media (for example here). I think the tone of the Guardian's article is kind of interesting $-$ basically it first praises Nate Silver's ability but also points out a series of "high-profile misses that could lead some observers to discount their predictions this year". Author Tom McCarthy goes on to report on very wrong predictions, for example on Trump's chance of securing the Republican nomination.

I guess this is such a fluid and dynamic situation that perhaps it's a bit too early to call a definitive outcome. But I'm sure we'll be bombarded with predictions in the next few months...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3498401641845703, "perplexity": 1974.0940185032416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486936.35/warc/CC-MAIN-20190218135032-20190218161032-00316.warc.gz"}
http://physics.stackexchange.com/questions/38925/satellite-orbital-period?answertab=votes
# Satellite Orbital Period [closed]

I know I can calculate the period of a satellite orbit by Kepler's third law, but somehow it does not work out. The satellite is 20,200 km from the surface of the earth.

• $r=$ orbit radius = earth's radius + satellite's distance from the surface of the earth = 20,200,000 + 6,378,000 = 26,578,000 m
• $G=6.67\cdot10^{-11}$
• $M =$ mass of earth $= 5.9722\cdot10^{24}$

now $T=(4\pi^2r^3/GM)^{1/2} = 43108.699\ \mathrm{s} \Rightarrow T=11.975\ \mathrm{hours}$

BUT that isn't correct, as all the calculators say it is 16.53. I have no idea what I am doing wrong. I even followed this example and I got everything right using the numbers in the example, but as soon as I put in my 26,578,000 m I got a different solution, even though I did not change anything else. What am I missing?

-

I don't know what you mean by "all the calculators", but the period of a satellite at that height is indeed about 12 hours, not 16.5 – Mark Eichenlaub Oct 3 '12 at 0:22

Welcome to Physics Stack Exchange! Please see our homework policy. We expect homework problems to have some effort put into them, and deal with conceptual issues. If you edit your question to explain (1) What you have tried, (2) the concept you have trouble with, and (3) your level of understanding, I'll be happy to reopen this. (Flag this message for ♦ attention with a custom message, or reply to me in the comments with @Manishearth to notify me) – Manishearth Dec 29 '12 at 15:57
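The arithmetic in the question actually checks out, which supports the first comment: roughly 12 hours is right for that altitude. A quick sketch of the same Kepler's-third-law calculation (my own, using the poster's constants):

```python
import math

G = 6.67e-11           # gravitational constant, m^3 kg^-1 s^-2
M = 5.9722e24          # mass of the Earth, kg
r = 6.378e6 + 20.2e6   # orbital radius: Earth radius + altitude, m

T = math.sqrt(4 * math.pi**2 * r**3 / (G * M))
print(T, T / 3600)     # about 4.31e4 s, i.e. roughly 12 hours
```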
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7465904951095581, "perplexity": 706.5304966382213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999675924/warc/CC-MAIN-20140305060755-00090-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/89539/normalized-correlation-with-a-constant-vector
# Normalized correlation with a constant vector

I am confused how to interpret the result of performing a normalized correlation with a constant vector. Since you have to divide by the standard deviation of both vectors (reference: http://en.wikipedia.org/wiki/Cross-c...ss-correlation ), if one of them is constant (say a vector of all 5's, which has standard deviation = 0), then the correlation is infinity, but in fact the correlation should be zero, right? This isn't just a corner case: in general, if the standard deviation of one of the vectors is small, the correlation to any other vector is very high. Can anyone explain my misinterpretation?

Thanks,
David

-

You might have a better chance of getting a good answer at stats.stackexchange.com . – Angelo Feb 26 '12 at 4:57

The covariance is $0$, the correlation is undefined. Correlations (when defined) are always in the interval $[-1,1]$. – Robert Israel Feb 26 '12 at 18:23
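Robert Israel's point (the covariance is 0 and the correlation is undefined, not infinite) is easy to see numerically. A small sketch (mine, not from the thread):

```python
import numpy as np

a = np.array([5.0, 5.0, 5.0, 5.0])   # constant vector: standard deviation is 0
b = np.array([1.0, 2.0, 3.0, 4.0])

print(np.cov(a, b)[0, 1])       # 0.0 -- covariance with a constant vector vanishes
print(np.corrcoef(a, b)[0, 1])  # nan (0/0): the normalized correlation is undefined
```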
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9470335841178894, "perplexity": 388.58955746014686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931014049.81/warc/CC-MAIN-20141125155654-00222-ip-10-235-23-156.ec2.internal.warc.gz"}
http://www.scalaformachinelearning.com/2017/12/
## Thursday, December 7, 2017 ### Covariant and Contravariant Functors in Scala Scala is a first class functional programming and object-oriented language which supports among other concepts, higher-kind types, functors and monads. This post illustrates the capability of Scala to leverage the concepts of covariant and contravariant functors for tensor analysis with application to vector fields Note: This post requires a solid knowledge of functional programming as well understanding of differential geometry. Overview Most of Scala developers have some experience with the core tenets of functional programming: monads, functors and applicatives. Those concepts are not specific to Scala or even functional programming at large. There are elements of a field in Mathematics known as topology or algebraic topology. Differential geometry or differential topology makes heavy use of tensors that leverage covariant and contravariant functors. This post introduces the concepts of • Contravariant functors applied to co-vectors and differential forms • Projection of higher kind Vector fields 101 Let's consider a 3 dimension Euclidean space with basis vector {ei} and a vector field V (f1, f2, f3) [Note: we follow Einstein tensor indices convention] The vector field at the point P(x,y,z) as the tuple (f1(x,y,z), f2(x,y,z), f3(x,y,z)). The vector over a field of k dimension field can be formally. mathematically defined as $f: \boldsymbol{x} \,\, \epsilon \,\,\, \mathbb{R}^{k} \mapsto \mathbb{R} \\ f(\mathbf{x})=\sum_{i=1}^{n}{f^{i}}(\mathbf{x}).\mathbf{e}^{i}$ Example: $f(x,y,z) = 2x+z^{3}\boldsymbol{\mathbf{\overrightarrow{i}}} + xy+e^{-y}-z^{2}\boldsymbol{\mathbf{\overrightarrow{j}}} + \frac{x^{3}}{y}\boldsymbol{\mathbf{\overrightarrow{k}}}$ Now, let's consider the same vector V with a second reference (origin O' and basis vector e'i $f(\mathbf{x})=\sum_{i=1}^{n}{f'_{i}}(\mathbf{x}).\mathbf{e'}_{i}$ The transformation matrix Sij convert the coordinates value functions fi and f'i. The tuple f =(fi) or more accurately defined as (fi) is the co-vector field for the vector field V $S_{ij}: \begin{Vmatrix} f^{1} \\ f^{2} \\ f^{3} \end{Vmatrix} \mapsto \begin{Vmatrix} f'^{1} \\ f'^{2} \\ f'^{3} \end{Vmatrix}$ The scalar product of the co-vector f' and vector v(f) defined as is defined as $< f',v> = \sum f'_{i}.f^{i}$ Given the scalar product we can define the co-vector field f' as a linear map $\alpha (v) = < f',v> (1)$ Covariant functors I assume the reader has basic understanding of Functor and Monads. Here is short overview: A category C is composed of object x and morphism f defined as $C= \{ {x_{i}, f \in C | f: x_{i} \mapsto x_{j}} \}$ A functor F is a map between two categories C and D that preserves the mapping. $x\in C \Rightarrow F(x)\in D \\ x, y\in C \,\,\, F: x \mapsto y => F(x)\mapsto F(y)$ Let's look at the definition of a functor in Scala with the "preserving" mapping method, map 1 2 3 trait Functor[M[_]] { def map[U, V](m: M[U])(f: U => V): M[V] } Let's define the functor for a vector (or tensor) field. A vector field is defined as a sequence or list of fields (i.e. values or function values). type VField[U] = List[U] trait VField_Ftor extends Functor[VField] { override def map[U, V](vu: VField[U])(f: U => V): VField[V] = vu.map(f) } This particular implementation relies on the fact that List is a category with its own functor. The next step is to define the implicit class conversion VField[U] => Functor[VField[U]] so the map method is automatically invoked for each VField instance. 
implicit class vField2Functor[U](vu: VField[U]) extends VField_Ftor { final def map[V](f: U => V): VField[V] = super.map(vu)(f) } By default Covariant Functors (which preserve mapping) are known simply as Functors. Let's look at the case of Covector fields. Contravariant functors A Contravariant functor is a map between two categories that reverses the mapping of morphisms. $x, y\in C \,\,\, F: x \mapsto y => F(y)\mapsto F(x)$ trait CoFunctor[M[_]] { def map[U, V](m: M[U])(f: V => U): M[V] } The map method of the Cofunctor implements the relation M[V->U] => M[U]->M[V] Let's implement a co-vector field using a contravariant functor. The definition (1) describes a linear map between a vector V over a field X to the scalar product V*: V => T. A morphism on the category V* consists of a morphism of V => T or V => _ where V is a vector field and T or _ is a scalar function value. type CoField[V, T] = Function1[V, T] The co-vector field type, CoField is parameterized on the vector field type V which is a input or function parameter. Therefore the functor has to be contravariant. The higher kind type M[_] takes a single type as parameter (i.e. M[V]) but a co-vector field requires two types: • V: Vector field • T: The scalar function is that the result of the inner product <.> Fortunately the contravariant functor CoField_Ftor associated with the co-vector needs to be parameterized only with the vector field V. The solution is to pre-defined (or 'fix') the scalar type T using a higher kind projector for the type L[V] => CoField[V, T] T => ({type L[X] = CoField[X,T]})#L trait CoField_Ftor[T] extends CoFunctor[({type L[X] = CoField[X,T]})#L ] { override def map[U,V]( vu: CoField[U,T] )(f: V => U): CoField[V,T] = (v: V) => vu(f(v)) } As you can see the morphism over the type V on the category CoField is defined as f: V => U instead of f: U => V. A kind parameterized on the return type (Function1) would require the 'default' (covariant) functor. Once again, we define an implicit class to convert a co-vector field, of type CoField to its functor, CoField2Ftor implicit class CoField2Ftor[U,T](vu: CoField[U,T]) extends CoField_Ftor[T] { final def map[V](f: V => U): CoField[V,T] = super.map(vu)(f) } Evaluation Let's consider a field of function values FuncD of two dimension: v(x,y) = f1(x,y).i + f2(x,y.j. The Vector field VField is defined as a list of two function values. type DVector = Array[Double] type FuncD = Function1[DVector, Double] type VFieldD = VField[FuncD] The vector is computed by assigning a vector field to a specific point (P(1.5, 2.0). The functor is applied to the vector field, vField to generate a new vector field vField2 val f1: FuncD = new FuncD((x: DVector) => x(0)*x(1)) val f2: FuncD = new FuncD((x: DVector) => -2.0*x(1)*x(1)) val vfield: VFieldD = List[FuncD](f1, f2) val value: DVector = Array[Double](1.5, 2.0) val vField2: VFieldD = vfield.map( _*(4.0)) A co-vector field, coField is computed as the sum of the fields (function values) (lines 1, 2). Next, we compute the product of co-vector field and vector field (scalar field product) (line 6). We simply apply the co-vector Cofield (linear map) to the vector field. Once defined, the morphism _morphism is used to generate a new co-vector field coField2 through the contravariant function CoField2Functor.map(line 10). Finally a newProduction is generated by composing the original covariant field Cofield and the linear map coField2 (line 12). 
1 2 3 4 5 6 7 8 9 10 11 12
val coField: CoField[VFieldD, FuncD] = (vf: VFieldD) => vf(0) + vf(1)
val contraFtor: CoField2Ftor[VFieldD, FuncD] = coField
val product = coField(vfield)
val _morphism: VFieldD => VFieldD = (vf: VFieldD) => vf.map( _*(3.0))
val coField2 = contraFtor.map( _morphism )
val newProduct: FuncD = coField2(vfield)

Environment
Scala 2.11.8
JDK 1.8

References
• Tensor Analysis on Manifolds - R. Bishop, S. Goldberg - Dover Publications 1980
• Differential Geometric Structures - W. Poor - Dover Publications 2007
• Functors and Natural Transformations - A. Tarlecki - Category Theory 2014
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9027256965637207, "perplexity": 4973.909237296876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00645.warc.gz"}
https://www.geeksforgeeks.org/katz-centrality-centrality-measure/?ref=rp
# Katz Centrality (Centrality Measure)

Last Updated : 13 Feb, 2018

In graph theory, the Katz centrality of a node is a measure of centrality in a network. It was introduced by Leo Katz in 1953 and is used to measure the relative degree of influence of an actor (or node) within a social network. Unlike typical centrality measures which consider only the shortest path (the geodesic) between a pair of actors, Katz centrality measures influence by taking into account the total number of walks between a pair of actors. It is similar to Google's PageRank and to the eigenvector centrality.

Measuring Katz centrality

A simple social network: the nodes represent people or actors and the edges between nodes represent some relationship between actors

Katz centrality computes the relative influence of a node within a network by measuring the number of the immediate neighbors (first degree nodes) and also all other nodes in the network that connect to the node under consideration through these immediate neighbors. Connections made with distant neighbors are, however, penalized by an attenuation factor $\alpha$. Each path or connection between a pair of nodes is assigned a weight determined by $\alpha$ and the distance between nodes as $\alpha^{d}$.

For example, in the figure on the right, assume that John's centrality is being measured and that $\alpha = 0.5$. The weight assigned to each link that connects John with his immediate neighbors Jane and Bob will be $(0.5)^{1} = 0.5$. Since Jose connects to John indirectly through Bob, the weight assigned to this connection (composed of two links) will be $(0.5)^{2} = 0.25$. Similarly, the weight assigned to the connection between Agneta and John through Aziz and Jane will be $(0.5)^{3} = 0.125$ and the weight assigned to the connection between Agneta and John through Diego, Jose and Bob will be $(0.5)^{4} = 0.0625$.

Mathematical formulation

Let A be the adjacency matrix of a network under consideration. Elements $a_{ij}$ of A take a value 1 if a node i is connected to node j and 0 otherwise. The powers of A indicate the presence (or absence) of links between two nodes through intermediaries. For instance, if in the matrix $A^{3}$ the element $(A^{3})_{2,12}$ is non-zero, it indicates that node 2 and node 12 are connected through some first and second degree neighbors of node 2. If $x_{i}$ denotes the Katz centrality of a node i, then mathematically:

$$x_{i} = \sum_{k=1}^{\infty} \sum_{j=1}^{n} \alpha^{k} (A^{k})_{ji}$$

Note that the above definition uses the fact that the element at location $(i, j)$ of the adjacency matrix raised to the power $k$ (i.e. $(A^{k})_{ij}$) reflects the total number of $k$ degree connections between nodes $i$ and $j$. The value of the attenuation factor $\alpha$ has to be chosen such that it is smaller than the reciprocal of the absolute value of the largest eigenvalue of the adjacency matrix A. In this case the following expression can be used to calculate Katz centrality:

$$\vec{x} = \left( (I - \alpha A^{T})^{-1} - I \right) \vec{1}$$

Here $I$ is the identity matrix, $\vec{1}$ is an identity vector of size n (n is the number of nodes) consisting of ones. $A^{T}$ denotes the transposed matrix of A and $(I - \alpha A^{T})^{-1}$ denotes matrix inversion of the term $(I - \alpha A^{T})$.

Following is the code for the calculation of the Katz centrality of the graph and its various nodes.

def katz_centrality(G, alpha=0.1, beta=1.0,
                    max_iter=1000, tol=1.0e-6,
                    nstart=None, normalized=True,
                    weight = 'weight'):
    """Compute the Katz centrality for the nodes
        of the graph G.

    Katz centrality computes the centrality for a node
    based on the centrality of its neighbors. It is a
    generalization of the eigenvector centrality. The
    Katz centrality for node i is

    ..
math::          x_i = \alpha \sum_{j} A_{ij} x_j + \beta,      where A is the adjacency matrix of the graph G     with eigenvalues \lambda.      The parameter \beta controls the initial centrality and      .. math::          \alpha < \frac{1}{\lambda_{max}}.        Katz centrality computes the relative influence of    a node within a network by measuring the number of     the immediate neighbors (first degree nodes) and      also all other nodes in the network that connect    to the node under consideration through these     immediate neighbors.      Extra weight can be provided to immediate neighbors    through the parameter :math:\beta.  Connections     made with distant neighbors are, however, penalized    by an attenuation factor \alpha which should be     strictly less than the inverse largest eigenvalue     of the adjacency matrix in order for the Katz    centrality to be computed correctly.         Parameters    ----------    G : graph      A NetworkX graph      alpha : float      Attenuation factor      beta : scalar or dictionary, optional (default=1.0)      Weight attributed to the immediate neighborhood.       If not a scalar, the dictionary must have an value      for every node.      max_iter : integer, optional (default=1000)      Maximum number of iterations in power method.      tol : float, optional (default=1.0e-6)      Error tolerance used to check convergence in      power method iteration.      nstart : dictionary, optional      Starting value of Katz iteration for each node.      normalized : bool, optional (default=True)      If True normalize the resulting values.      weight : None or string, optional      If None, all edge weights are considered equal.      Otherwise holds the name of the edge attribute      used as weight.      Returns    -------    nodes : dictionary       Dictionary of nodes with Katz centrality as        the value.      Raises    ------    NetworkXError       If the parameter beta is not a scalar but        lacks a value for at least  one node               Notes    -----          This algorithm it uses the power method to find    the eigenvector corresponding to the largest     eigenvalue of the adjacency matrix of G.    The constant alpha should be strictly less than     the inverse of largest eigenvalue of the adjacency    matrix for the algorithm to converge.    The iteration will stop after max_iter iterations     or an error tolerance ofnumber_of_nodes(G)*tol      has been reached.      When \alpha = 1/\lambda_{max} and \beta=0,     Katz centrality is the same as eigenvector centrality.      For directed graphs this finds "left" eigenvectors    which corresponds to the in-edges in the graph.    For out-edges Katz centrality first reverse the     graph with G.reverse().            
"""    from math import sqrt      if len(G) == 0:        return {}      nnodes = G.number_of_nodes()      if nstart is None:          # choose starting vector with entries of 0        x = dict([(n,0) for n in G])    else:        x = nstart      try:        b = dict.fromkeys(G,float(beta))    except (TypeError,ValueError,AttributeError):        b = beta        if set(beta) != set(G):            raise nx.NetworkXError('beta dictionary '                                   'must have a value for every node')      # make up to max_iter iterations    for i in range(max_iter):        xlast = x        x = dict.fromkeys(xlast, 0)          # do the multiplication y^T = Alpha * x^T A - Beta        for n in x:            for nbr in G[n]:                x[nbr] += xlast[n] * G[n][nbr].get(weight, 1)        for n in x:            x[n] = alpha*x[n] + b[n]          # check convergence        err = sum([abs(x[n]-xlast[n]) for n in x])        if err < nnodes*tol:            if normalized:                  # normalize vector                try:                    s = 1.0/sqrt(sum(v**2 for v in x.values()))                  # this should never be zero?                except ZeroDivisionError:                    s = 1.0            else:                s = 1            for n in x:                x[n] *= s            return x      raise nx.NetworkXError('Power iteration failed to converge in '                           '%d iterations.' % max_iter) The above function is invoked using the networkx library and once the library is installed, you can eventually use it and the following code has to be written in python for the implementation of the katz centrality of a node. >>> import networkx as nx>>> import math>>> G = nx.path_graph(4)>>> phi = (1+math.sqrt(5))/2.0 # largest eigenvalue of adj matrix>>> centrality = nx.katz_centrality(G,1/phi-0.01)>>> for n,c in sorted(centrality.items()):...    print("%d %0.2f"%(n,c)) The output of the above code is: 0 0.371 0.602 0.603 0.37 The above result is a dictionary depicting the value of katz centrality of each node. The above is an extension of my article series on the centrality measures. Keep networking!!! Attention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready.  To complete your preparation from learning a language to DS Algo and many more,  please refer Complete Interview Preparation Course. In case you wish to attend live classes with experts, please refer DSA Live Classes for Working Professionals and Competitive Programming Live for Students. My Personal Notes arrow_drop_up
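As a numerical cross-check of the formulation above, here is a short sketch (my addition, assuming a recent NetworkX that provides `to_numpy_array`). It solves the linear system behind the power iteration, $(I - \alpha A^{T})x = \beta\vec{1}$, directly with NumPy (this differs from the displayed closed form only by the constant term that the "$-I$" removes) and compares the normalized result with the `katz_centrality` output for the same path graph used in the example.

```python
# Closed-form cross-check of Katz centrality (illustrative sketch, not part of the article).
import math
import numpy as np
import networkx as nx

G = nx.path_graph(4)
A = nx.to_numpy_array(G)           # adjacency matrix (symmetric for this graph)
n = A.shape[0]

phi = (1 + math.sqrt(5)) / 2.0     # largest eigenvalue of A for a path on 4 nodes
alpha, beta = 1 / phi - 0.01, 1.0  # alpha must stay below 1 / lambda_max

# Fixed point of x = alpha * A^T x + beta * 1, solved without explicit inversion.
x = np.linalg.solve(np.eye(n) - alpha * A.T, beta * np.ones(n))
x /= np.linalg.norm(x)             # same L2 normalization as katz_centrality

print(np.round(x, 2))              # [0.37 0.6  0.6  0.37]
print(nx.katz_centrality(G, alpha))  # agrees with the above up to the tolerance
```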
https://blog.flyingcoloursmaths.co.uk/getting-closer-pi/
A lovely curiosity came my way via @mikeandallie and @divbyzero: if $p$ is an approximation of $\pi$ correct to $n$ decimal places, then (as Shanks observed) $p + \sin(p)$ approximates $\pi$ to roughly three times as many decimal places. Isn't that neat?

If I use an estimate $p = 3.142$, then this method gives $\pi \approx p + \sin(p) = 3.141\ 592\ 653\ 6$, which is off by about $10^{-11}$ – even better than Shanks suggests.

So, why does it work? It's a two-step chain of reasoning: a trig identity and a Maclaurin series.

The trig identity is that $\sin(\pi - p) = \sin(p)$, by symmetry. We've picked $p$ so that $\pi - p$ is small – in fact, smaller in size than $5 \times 10^{-n-1}$.

When an angle $x$ is small, $\sin(x)$ can be approximated as $\sin(x) \approx x - \frac 16 x^3$. In particular, $\sin(p) = \sin(\pi - p) \approx (\pi-p) - \frac 16 (\pi - p)^3$.

Looking at Shanks's original setup, we have $p + \sin(p) \approx p + (\pi - p) - \frac 16 (\pi - p)^3$, which simplifies to $\pi - \frac16 (\pi - p)^3$.

We know that $|\pi - p|$ is at most $5 \times 10^{-n-1}$, so $\left|\frac 16 (\pi - p)^3 \right| \le \frac {125}{6} \times 10^{-3n - 3} < 5\times 10^{-3n-2}$. I reckon the approximation is generally correct to $3n+1$ decimal places ((Perhaps this breaks down in some cases when p's last digit is 4 or 5, but that is left as an exercise.)).
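To see the roughly-tripling of correct digits in action, here is a tiny Python check (my addition, not part of the original post):

```python
# Quick numerical check of the p -> p + sin(p) trick.
# One application of the map roughly triples the number of correct decimal places.
from math import pi, sin

for n in (2, 3, 4):
    p = round(pi, n)       # pi rounded to n decimal places
    better = p + sin(p)    # Shanks's improvement
    print(f"n={n}: p={p}  p+sin(p)={better:.15f}  error={abs(better - pi):.2e}")
```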
https://byjus.com/ncert-solutions-class-11-physics/chapter-14-oscillations/
# NCERT Solutions For Class 11 Physics Chapter 14 : Oscillations ## NCERT Solutions For Class 11 Physics Chapter 14 PDF Free Download NCERT solutions for class 11 physics chapter 14 oscillations is provided here which is one of the most common topics in Class 11 physics examination. The NCERT solutions for class 11 physics chapter 14 oscillations are created by subject experts in accordance with the CBSE physics class 11 syllabus. The NCERT Solutions for class 11 physics chapter 14 is prepared in such a way that students can easily understand the concepts of oscillation. Get the NCERT Solutions for class 11 physics chapter 14 pdf here. ## Class 11 Physics NCERT Solutions for  Chapter 14 Oscillations These NCERT solutions provide you with the answers to the question from the textbook, important questions from previous year question papers and sample papers. The solution comprises of Worksheets, exemplary problems, Short and long answer questions, MCQ’s, tips and tricks to help you understand them thoroughly. NCERT solutions are one of the best tools to prepare physics for class 11. Oscillations and Resonance is a very important chapter in CBSE class 11. Students must prepare this chapter well to score well in their board examination. The NCERT Solutions for Class 11 Physics Oscillations and Resonance is given below so that students can understand the concepts of this chapter in depth. ### Subtopics of class 11 physics chapter 14 oscillations 1. Introduction 2. Periodic and oscillatory motions 3. Simple harmonic motion 4. Simple harmonic motion and uniform circular motion 5. Velocity and acceleration in simple harmonic motion 6. Force law for simple harmonic motion 7. Energy in simple harmonic motion 8. Some systems executing SHM Ex 9. Damped simple harmonic motion 10. Forced oscillations and resonance. ### Important questions of Class 11 Physics Chapter 14 oscillations Q1. Which of the following is periodic motions: (a) A swimmer making to and fro laps in a river. (b) A star revolving around a black hole. (c) A bullet fired from a gun. (d) A freely suspended bar magnet is moved away from its N –S direction and released. ( a ) A swimmer’s motion maybe to and fro but as it lacks a definite period, it is not periodic. ( b ) A star revolving a black hole has a periodic motion as it always returns back to the same position after a definite amount of time. ( c ) A bullet fired from a gun isn’t periodic as it does not return back to the gun. ( d ) When a freely suspended magnet is moved out from its N-S direction its motion will be periodic, because it oscillates about its mean position within definite intervals of time Q2. Identify the ones with simple harmonic motion and the ones with periodic motion but not simple harmonic: ( a ) rotation of Pluto about its axis. ( b ) oscillating mercury column in a u-tube. ( c ) a ball bearing being released from slightly above the bowl’s lowermost point. ( d ) a polyatomic molecule’s general vibration about its equilibrium position . ( a ) It is a periodic motion but not simple harmonic motion because there is no to and fro motion about a fixed point. ( b ) It is simple harmonic motion. ( c ) It is simple harmonic motion. ( d ) A polyatomic molecule’s  general vibration is the superposition of individual simple harmonic motions of a number of different molecules. Hence, it is not simple harmonic, but periodic. Q3. Among the four x-t plots for linear motion of a particle, identify the graphs that represent a periodic motion. 
Also, find the find the period of motion if it is periodic. ( a ) It is non-periodic. As motion is not repeated. ( b ) As the motion is repeated every 2 seconds, it is a periodic motion. Period = 2s. ( c ) Non-periodic as motion is repeated in one position only. ( d ) As the motion is repeated every 2 seconds, it is a periodic motion. Period = 2s. Q4. Among the given functions of time which are the ones representing: ( i ) simple harmonic, ( ii ) periodic but not simple harmonic motion, and ( iii ) non-periodic motion? Find the period for each case of periodic motion (ω is a positive constant): a) sin ωt – cos ωt b) sin3ωt c) 3 cos (π/4 – 2ωt) d) cos ωt + cos 3ωt + cos 5ωt e) exp (–ω2 t2 ) f) 1 + ωt + ω2 t2 (a) sin ωt – cos ωt = $\sqrt{2}[\frac{1}{\sqrt{2}}sin\omega t – \frac{1}{\sqrt{2}}cos\omega t]$ = $\sqrt{2}[cos\frac{\pi }{4} \times sin\omega t – sin\frac{\pi }{4}\times cos\omega t]$ = $\sqrt{2}sin[\omega t – \frac{\pi }{4}]$ As this can be represented as  asin ( ωt  + Φ ) it represents SIMPLE HARMONIC MOTION Its period is : $\frac{2\pi }{\omega }$ (b) sin3ωt = ¼ [3sin ωt –sin 3ωt] Even though the two sinωt represent simple harmonic motions respectively, but they are periodic because superposition of two SIMPLE HARMONIC MOTION is not simple harmonic. (c) 3 cos (π/4 – 2ωt) = 3 cos (2ωt – π/4) As it can be written as : a sin ( ωt  + Φ) , it represents SIMPLE HARMONIC MOTION Its period is : π/ω (d) In cos ωt + cos 3ωt + cos 5ωt, each cosine function represents SIMPLE HARMONIC MOTION, but the super position of SIMPLE HARMONIC MOTION gives periodic. (e) As it is an exponential function, it is non periodic as it does not repeat itself. ( f ) 1 + ωt + ω2 t2 is non periodic. Q5. A body having linear simple harmonic motion between two points, X and Y, 20 cm apart. If  direction from X  to Y is the positive direction, then  provide signs of acceleration, force, and  velocity on the body when it is ( i ) at  X, ( ii ) at  Y, ( iii ) at the mid-point of XY moving towards X, ( iv ) at 2 cm away from Y moving towards X, ( v ) at 3 cm away from X  moving towards Y, ( vi ) at 4  cm away from Y  moving towards X. ( i ) At X the body with Simple harmonic motion is momentarily at rest, thus its velocity is zero. Force and accelerations are positive as it is directed towards Y from X. ( ii ) At Y velocity is zero, force and accelerations are negative as they are directed towards X from Y. ( iii ) At the midpoint of XY moving towards X, the body has positive velocity but negative acceleration and force. ( iv ) When the body is 2 cm away from Y moving towards X it has positive velocity but negative acceleration and force. ( v ) When the body is 3 cm away from X moving towards Y, it has positive force, acceleration, and velocity. ( vi ) When the body is 4 cm away from Y moving towards X it has positive velocity but negative acceleration and force. Q6. Which among the following relationships between displacement, ‘s’ and acceleration ‘a’ of a body represents simple harmonic motion? a) a = 0.5s b) a = –10s c) a = –10s +5 d) a = 300s3 For the simple harmonic motion  to be present the requisite relation between acceleration and displacement is: a = –k s. Which is being satisfied by the relation in option ( b ). Q7. The motion of a body in simple harmonic motion is given by the displacement function, x (t) = A cos (ωt + φ). Given that at t = 0, the initial velocity of the body is ω cm/s and its initial position is 1 cm, calculate its initial phase angle and amplitude? 
If in place of the cosine function, a sine function is used to represent the simple harmonic motion: x = B sin (ωt + α), calculate the body’s amplitude and initial phase considering the initial conditions given above. [Angular frequency of the particle is π/ s] Given, Initially, at t = 0: Displacement, x = 1 cm Initial velocity, v = ω cm/sec. Angular frequency, ω = π rad/s It is given that: x(t) = A cos( ωt  + Φ)       . . . . . . . . . . . . . . . . ( i ) 1 = A cos( ω x 0  + Φ) = Acos Φ A cos Φ = 1                           . . . . . . . . . . . . . . . . ( ii ) Velocity, v = dx / dt differentiating equation ( i ) w.r.t  ‘t’ v = – Aωsin ( ωt  + Φ) Now at t = 0; v = ω and => ω = – Aωsin ( ωt  + Φ) 1 = – A sin( ω x 0  + Φ) = -Asin(Φ) Asin(Φ) = – 1         . . . . . . . . . . . . . . . . . . . . . . .  ( iii ) Adding and squaring equations ( ii ) and ( iii ), we get: A2(sin2 Φ + cos2 Φ) = 1 +1 thus, A =$\sqrt{2}$ Dividing equation ( iii ) by ( ii ), we get : tan Φ = -1 Thus, Φ =3π/4 , 7π/4 Now if SIMPLE HARMONIC MOTION is given as : x = B sin( ωt  + α) Putting the given values in the equation , we get : 1 = B sin ( ω x 0  + α) Bsin α = 1             . . . . . . . . . . . . . . . . . . . . ( iv ) Also, velocity ( v ) = ω Bcos (ωt  + α) Substituting the values we get : π = π Bsin α Bsin α = 1              . . . . . . . . . . . . . . . . . . . . ( v ) Adding and squaring equations ( iv ) and ( v ), we get: B2[ sin2 α + cos2 α] =2 Therefore, B = $\sqrt{2}$ Dividing equation ( iv ) by equation ( v ), we get : Bsin α/ Bcos α = 1 tanα =1 = tan (π/4) Therefore, α = π/4 , 5π/4, . . . . . . . Q8. A spring balance has a scale with the range of 0 to 100 kg. An object suspended from this balance, when displaced and released, starts oscillating with a period of 0.6 s. Find the weight of this object? [Length of the scale is 40 cm]. Given, Maximum mass that the scale can read, M = 100 kg. Maximum displacement of the spring = Length of the scale, l = 40 cm = 0.4 m Time period, T = 0.6 s We know, Maximum force exerted on the spring, F = Mg Where, g =  9.8 m/s2 => F = 100 × 9.8 = 980 N Thus, Spring constant, k = F / l k = 980/0.4 = 2450 N/m Now, Let the object have a mass m. We know, Time period, t = 2 π $\sqrt{\frac{m}{k}}$ $m = \left ( \frac{t}{2\pi } \right )^{2}\times k$ $m = \left ( \frac{0.6}{2\times 3.14 } \right )^{2}\times 2450$ Therefore, $m = 22.36 kg$ Thus, the weight of the object is 22.36 x 9.8 = 219.167 N Q9. A spring with a spring constant of 1200 N / m is placed on a horizontal plane as depicted in the figure below. A 6 kg mass is then hooked to the free end of the spring. The mass is then pulled rightwards for 4 cm and released. Calculate ( i ) Oscillation frequency, ( ii ) Maximum acceleration of the mass, and ( iii ) Maximum speed of the mass. Sol: Given, Spring constant, k = 1200 N/m Mass, m = 6 kg Displacement, A = 4.0 cm = 0.04 cm ( i ) Oscillation frequency  v = 1/T Where, T = time period. $v = \frac{1}{2\times \pi }\sqrt{\frac{k}{m}}$ Therefore, $v = \frac{1}{2\times 3.14}\sqrt{\frac{1200}{6}}$ = 2.25 per second ( ii ) Maximum acceleration (a) = ω2 A Where, ω = Angular frequency = $\sqrt{\frac{k}{m}}$ A = maximum displacement Therefore, a = A( k/m ) a  = 0.04 x (1200/6 ) = 8 m s2 ( iii ) Maximum velocity, VMAX = ω A = 0.04 X $\sqrt{\frac{1200}{6}}$ Therefore, VMAX = 0.56 m / s Q10. In the above question let us consider the position of mass when the spring is relaxed as x = 0, and the left to right direction as the positive direction of the x-axis. 
Provide x as a function of time t for the oscillating mass, if at the moment we start the stopwatch (t = 0), the mass is: ( i ) at the mean position, ( ii ) at the maximum stretched position, and ( iii ) at the maximum compressed position. Sol: Given, Spring constant, k = 1200 N/m Mass, m = 6 kg Displacement, A = 4.0 cm = 0.04 cm ω = 14.14 s-1 ( i )Since time is measured from mean position, x = A sin ω t x = 4 sin 14.14t ( ii ) At the maximum stretched position, the mass has an initial phase of π/2 rad. Then, x = A sin( ωt + π/2  ) = A cos ωt = 4 cos 14.14t ( iii ) At the maximum compressed position, the mass is at its leftmost position with an initial phase of 3π/2 rad. Then, x = A sin( ωt + 3π/2  ) = -4 cos14.14 t Q11. Following diagrams represent two circular motions. The revolution period, the radius of the circle, the sense of revolution (i.e. clockwise or anticlockwise) and the initial position, are indicated on each diagram. Find the corresponding simple harmonic motions of the x-projection of the radius vector of the revolving body B, in each situation. Sol: ( 1 ) For time period, T = 4 s Amplitude, A = 3 cm At time, t = 0, the radius vector OB makes an angle π/2   with the positive x-axis, i.e., Phase angel Φ = + π/2 Therfore, the equation of simple harmonic motion for the x-projection of OB, at time t is: x = A cos [ 2πt/T  + Φ ] = 3 cos [2πt/4 + π/2 ] = -3sin ( πt/2 ) = -3sin ( πt/2 ) cm ( 2 ) Time period, T = 8 s Amplitude, A = 2 m At time t = 0, OB makes an angle π with the x-axis, in the anticlockwise direction. Thus, phase angle, Φ = + π Therefore, the equation of simple harmonic motion for the x-projection of OB, at time t is: x = A cos [ 2πt/T  + Φ ] = 2 cos [ 2πt/8  + π  ] = -2 cos ( πt/4 ) Q12. Draw the corresponding reference circle for each of the following simple harmonic motions. Mention the initial (t = 0) position of the body, the angular speed of the rotating body and the radius of the circle. For simplification, consider the sense of rotation to be anticlockwise in each case: [ x is in cm and t is in s]. ( i ) x = –2 sin (3t  +  π/3) ( ii ) x = cos (π/6  –  t ) ( iii ) x = 3 sin (2πt  +  π/4) ( iv ) x = 2 cos πt Sol: ( a ) x = –2 sin (3t  +  π/3) = 2cos( 3t + π/3 + π/2) = 2 cos ( 3t  +  5π/6 ) On comparing this equation with the standard equation for  Simple harmonic motion: x = A cos [ 2πt/T  + Φ ] We get, Amplitude = 2 cm Phase angle =  5π/6 = 1500 Angular velocity= ω = 2πt/T = 3 rad/sec Thus the corresponding reference circle for motion of this body is  as : ( ii ) x = cos (π/6  –  t ) = cos ( t – π/6 ) On comparing this equation with the standard equation for  Simple harmonic motion ; x = A cos [ 2πt/T  + Φ ] We get, Amplitude = 1 cm Phase angle =  -π/6 =  -300 Angular velocity= ω = 2πt/T = 1 rad/sec Thus the corresponding reference circle for motion of this body is  as: ( iii ) x = 3 sin (2πt  +  π/4) = – 3 cos [(2πt  +  π/4) + π/2 ] = -3cos(2πt  +  3π/4 ) On comparing this equation with the standard equation for  Simple harmonic motion ; x = A cos [ 2πt/T  + Φ ] We get; Amplitude = 3 cm Phase angle = 3π/4 =  1350 Angular velocity= ω = 2πt/T = 2 rad/sec Thus the corresponding reference circle for motion of this body is  as: ( iv ) x = 2 cos πt On comparing this equation with the standard equation for  Simple harmonic motion: x = A cos [ 2πt/T  + Φ ] We get, Amplitude = 2 cm Phase angle =  00 Angular velocity= ω = π rad/sec Thus the corresponding reference circle for motion of this body is  as: Q13. 
Figure below ( a ) depicts a spring of force constant k attached at one end and a block of  mass, ‘m’ clamped to its free end. A force F is applied on the free end of the spring to stretch it. Figure ( b ) depicts the same spring with both of its ends clamped to blocks of mass m.  Both of the ends of the spring in figure ( b ) is stretched by the same force F. Find ( i ) the spring’s maximum extension in both the cases. ( ii ) If the block in figure ( a ) and the two blocks in figure  ( b ) are released, what would the oscillation period be in the two cases? Sol: For the one block system: When force F, is applied to the free end of the spring, there is an extension. ( i )For the maximum extension, we know: F = kl Where, k is the spring constant. Thus, the maximum extension produced in the spring ,l = F/k For the two block system: The displacement (x) produced in this case is: x = l/2 Net force, F = 2 k x          [ l =2x] => F = 2k (l/2) Therefore, l = F / k ( ii ) For one block system : Force on the block of mass m , F = ma = $\frac{md^{2}x}{dt^{2}}$ Where, x is the displacement of the block in time t Therefore,  $\frac{md^{2}x}{dt^{2}}$ = -kx The negative sign is present as the direction of the elastic force is opposite to the direction of the displacement. $\frac{d^{2}x}{dt^{2}}$ = – x (k/m) = – ω2 x Where, ω2 =k/m ω is the angular frequency of oscillation. Therefore, time period of oscillation, T = 2π/ω = 2π(m/k )1/2 For two block system : F = ma = $\frac{md^{2}x}{dt^{2}}$ $\frac{md^{2}x}{dt^{2}}$ = -2kx The negative sign is present as the direction of the elastic force is opposite to the direction of the displacement. $\frac{d^{2}x}{dt^{2}}$ = – 2x (k/m) = – ω2 x Where, ω2 =2k/m Where, ω is the angular frequency of oscillation. Therefore, time period of oscillation, T = 2π/ω = 2π (m/2k)1/2 Q14. The piston in the cylinder head of a V8 engine has a stroke (twice the amplitude) of 2.0 m. Given that the piston moves with simple harmonic motion at an angular frequency of 400 rad/min, find its maximum speed? Sol: Given, The angular frequency of the piston, ω = 400 rad/ min. Stroke = 2.0 m Amplitude, A = 2/2 = 1 m Thus, maximum speed VMAX =A ω = 400 m/ min Q15. On the surface of the moon, the value acceleration due to gravity is 1.7 ms-2. Calculate the time period of a simple pendulum on the lunar surface if its time period on earth is 7 s? Sol: Given, Acceleration due to gravity on the surface of moon, g’ = 1.7 m s-2 Acceleration due to gravity on the surface of earth, g = 9.8 m s-2 Time period of a simple pendulum on earth, T = 7s We know, $T = 2\pi \sqrt{\frac{l}{g}}$ Where, l is the length of the pendulum $l = \frac{T^{2}}{(2\pi )^{2}}\times 9.8$ = 12.17m On the moon’s surface, time period T’ =$2\pi \sqrt{\frac{l}{g’}}\\$ Therefore, $T’ =2\pi \sqrt{\frac{12.17}{1.7’}}$ = 16.82 s Q16. ( i ) The time period of a body having simple harmonic motion depends on the mass m of the body and the force constant k: T =$2\pi \sqrt{\frac{m}{k}}$ A simple pendulum exhibits simple harmonic motion. Then why does the time period of a pendulum not depend upon its mass? ( ii ) For small angle oscillations, a simple pendulum exhibits simple harmonic motion ( more or less). For larger angles of oscillation, detailed analysis show that T is greater than $2\pi \sqrt{\frac{l}{g}}$. Explain. ( iii ) A boy with a wristwatch on his hand jumps from a helicopter. Will the wrist watch give the correct time during free fall? 
( iv ) Find the frequency of oscillation of a simple pendulum that is free falling from a tall bridge. Sol: ( i ) The time period of a simple pendulum, T =$2\pi \sqrt{\frac{m}{k}}$ For a simple pendulum, k is expressed in terms of mass m, as: k ∝ m m/k = constant Thus, the time period T, of a simple pendulum is independent of its mass. ( ii ) In the case of a simple pendulum, the restoring force acting on the bob of the pendulum is: F = –mg sinθ Where, F = Restoring force m = Mass of the bob g = Acceleration due to gravity θ = Angle of displacement For small θ, sin θ ∼ θ For large θ, sin θ is greater than θ. This decreases the effective value of g. Thus, the time period increases as: T =$2\pi \sqrt{\frac{l}{g’}}$ ( iii ) As the working of a wrist watch does not depend upon the acceleration due to gravity, the time shown by it will be correct during free fall. ( iv ) As acceleration due to gravity is zero during free fall, the frequency of oscillation will also be zero. Q17. A simple pendulum with a bob of mass M and a pendulum of length l is suspended in a car. The car then moves on a circular track of radius R at a constant velocity v. If the pendulum makes small oscillations in a radial direction from its equilibrium position, find its time period. Sol: The bob of the simple pendulum will experience centripetal acceleration because by the circular motion of the car and the acceleration due to gravity. Acceleration due to gravity = g Centripetal acceleration = v2 / R Where, v is the uniform speed of the car R is the radius of the track Effective acceleration ( aaeff ) is given as : aaeff = $\sqrt{g^{2}+(\frac{v^{2}}{R})^{2}}$ Time period, T = $2\pi \sqrt{\frac{l}{ a_{aeff }}}$ Where, l is the length of the pendulum Therefore, Time period T = $2\pi \sqrt{\frac{l}{\sqrt{g^{2}+\frac{v^{4}}{R^{2}}}}}$ Q18. A cylindrical cork of base area A, height h and of density ρ floats in a liquid of density ρ1. The cork is pushed into the liquid slightly and then released. Prove that the cork oscillates up and down with simple harmonic motion at a period of T = $2\pi \sqrt{\frac{h\rho }{\rho _{1}g}}$ Sol: Given, Base area of the cork = A Height of the cork = h Density of the liquid = ρ1 Density of the cork = ρ In equilibrium: The weight of the cork = Weight of the liquid displaced by the floating cork. Let the cork be dipped slightly by x. As a result, some excess water of a certain volume is displaced. Thus, an extra up-thrust provides the restoring force to the cork. Up-thrust = Restoring force, F = Weight of the extra water displaced F = – (Volume × Density × g) Volume = Area × Distance through which the cork is depressed Volume = Ax F = – A x ρ1 g     . .  . . . . . . . . . . . . . . . . .  ( i ) According to the force law: F = kx k = F/x Where, k is  a constant. k = F/x = – A x ρ1 g           . .  . . . . . . . . . . . . . . . . .  ( ii ) The time period of oscillations of the cork : T =$2\pi \sqrt{\frac{m}{ k }}$ . . . . . . . ( iii ) Where, m = mass of the cork = Volume x density = Base area of the cork × Height of the cork × Density of the cork = Ahρ Therefore, the expression for the time period becomes : T = $2\pi \sqrt{\frac{h\rho }{\rho _{1}g}}$     . . . . . [ Proved ] Q19. In a U-tube holding liquid mercury one end is open to the atmosphere and the other end is connected to a suction pump. If some pressure difference is maintained between the two columns. Prove that when the pump is removed, the mercury column in the U-tube exhibits simple harmonic motion. 
Sol:

Area of cross-section of the U-tube = A
Density of the mercury column = ρ
Acceleration due to gravity = g

Restoring force, F = weight of the displaced column of mercury
F = –(Volume × Density × g)
F = –(A × 2h × ρ × g) = –2Aρgh = –k × displacement in one arm (h)

where 2h is the height difference of the mercury columns in the two arms and k is a constant, given by k = –F/h = 2Aρg.

Time period, T = $2\pi \sqrt{\frac{m}{k}}$ = $2\pi \sqrt{\frac{m}{2A\rho g}}$

where m is the mass of the mercury column. Let l be the total length of the mercury in the U-tube. Then the mass of mercury, m = Volume of mercury × Density of mercury = Alρ.

Therefore, T = $2\pi \sqrt{\frac{Al\rho }{2A\rho g}}$, i.e. T = $2\pi \sqrt{\frac{l }{2 g}}$.

Thus, the mercury column exhibits simple harmonic motion with a time period of $2\pi \sqrt{\frac{l }{2 g}}$. [Proved]

In our daily life, we encounter various kinds of motion, and the study of oscillatory motion is basic to physics. Oscillations and Resonance is one of the most interesting chapters: students will get to know about periodic motion, the period, simple harmonic motion, and other related concepts from this chapter. Some important concepts and key points of Oscillations and Resonance are given below.

• The period T is the least time after which the motion repeats itself. Thus, the motion also repeats itself after nT, where n is an integer.
• Not every periodic motion is SHM. Only periodic motion governed by the force law F = –kx is simple harmonic.
• The motion of a simple pendulum is simple harmonic only for small angular displacements.
• Under forced oscillation, the phase of the harmonic motion of the particle differs from the phase of the driving force.

Also Access
NCERT Exemplar for Class 11 Physics Chapter 14
CBSE Notes for Class 11 Physics Chapter 14

Students are advised to get well versed with the concepts in this chapter to score well in their board examination. The concepts in this topic will also be useful when preparing for competitive entrance examinations like JEE and NEET. Students should follow some strategies while preparing for their physics exam; some effective ones are given below.

• Know the syllabus completely during preparation; it helps in understanding the pattern of the exam.
• Follow a timetable during preparation.
• Thoroughly follow the NCERT books while preparing.
• Making notes is one of the best ways to retain the concepts for a longer time.
• Practice lots of question papers and sample papers.

BYJU'S brings you the best study materials, notes, sample papers, important questions, MCQs, NCERT books, tips and tricks that help you to face the Class 11 examination confidently and score handsomely. BYJU'S videos and animations help you to remember the concepts for a long period.
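As a quick numerical cross-check of the time-period formulas used in Q8 and Q15 above, here is a short Python sketch (my addition, not part of the NCERT solutions):

```python
# Reproducing the arithmetic from Q8 (spring balance) and Q15 (pendulum on the Moon).
from math import pi, sqrt

# Q8: spring balance, 0-100 kg scale over 0.40 m, oscillation period 0.6 s
g = 9.8                       # m/s^2
k = (100 * g) / 0.4           # spring constant: max force / max extension = 2450 N/m
m = (0.6 / (2 * pi))**2 * k   # from T = 2*pi*sqrt(m/k)
print(f"Q8: m = {m:.2f} kg, weight = {m * g:.1f} N")   # ~22.3 kg, ~219 N

# Q15: pendulum with T = 7 s on Earth, moved to the Moon (g_moon = 1.7 m/s^2)
l = (7 / (2 * pi))**2 * g     # length from T = 2*pi*sqrt(l/g)
T_moon = 2 * pi * sqrt(l / 1.7)
print(f"Q15: l = {l:.2f} m, T_moon = {T_moon:.2f} s")  # ~12.2 m, ~16.8 s
```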
https://en.m.wikibooks.org/wiki/Linear_Algebra/Changing_Representations_of_Vectors
# Linear Algebra/Changing Representations of Vectors Linear Algebra ← Change of Basis Changing Representations of Vectors Changing Map Representations → In converting ${\displaystyle {\rm {Rep}}_{B}({\vec {v}})}$ to ${\displaystyle {\rm {Rep}}_{D}({\vec {v}})}$ the underlying vector ${\displaystyle {\vec {v}}}$ doesn't change. Thus, this translation is accomplished by the identity map on the space, described so that the domain space vectors are represented with respect to ${\displaystyle B}$ and the codomain space vectors are represented with respect to ${\displaystyle D}$. (The diagram is vertical to fit with the ones in the next subsection.) Definition 1.1 The change of basis matrixfor bases ${\displaystyle B,D\subset V}$ is the representation of the identity map ${\displaystyle {\mbox{id}}:V\to V}$ with respect to those bases. ${\displaystyle {\rm {Rep}}_{B,D}({\mbox{id}})=\left({\begin{array}{c|c|c}\vdots &&\vdots \\{\rm {Rep}}_{D}({\vec {\beta }}_{1})&\;\cdots \;&{\rm {Rep}}_{D}({\vec {\beta }}_{n})\\\vdots &&\vdots \end{array}}\right)}$ Lemma 1.2 Left-multiplication by the change of basis matrix for ${\displaystyle B,D}$ converts a representation with respect to ${\displaystyle B}$ to one with respect to ${\displaystyle D}$. Conversly, if left-multiplication by a matrix changes bases ${\displaystyle M\cdot {\rm {Rep}}_{B}({\vec {v}})={\rm {Rep}}_{D}({\vec {v}})}$ then ${\displaystyle M}$ is a change of basis matrix. Proof For the first sentence, for each ${\displaystyle {\vec {v}}}$, as matrix-vector multiplication represents a map application, ${\displaystyle {\rm {Rep}}_{B,D}({\mbox{id}})\cdot {\rm {Rep}}_{B}({\vec {v}})={\rm {Rep}}_{D}(\,{\mbox{id}}({\vec {v}})\,)={\rm {Rep}}_{D}({\vec {v}})}$. For the second sentence, with respect to ${\displaystyle B,D}$ the matrix ${\displaystyle M}$ represents some linear map, whose action is ${\displaystyle {\vec {v}}\mapsto {\vec {v}}}$, and is therefore the identity map. Example 1.3 With these bases for ${\displaystyle \mathbb {R} ^{2}}$, ${\displaystyle B=\langle {\begin{pmatrix}2\\1\end{pmatrix}},{\begin{pmatrix}1\\0\end{pmatrix}}\rangle \qquad D=\langle {\begin{pmatrix}-1\\1\end{pmatrix}},{\begin{pmatrix}1\\1\end{pmatrix}}\rangle }$ because ${\displaystyle {\rm {Rep}}_{D}(\,{\mbox{id}}({\begin{pmatrix}2\\1\end{pmatrix}}))={\begin{pmatrix}-1/2\\3/2\end{pmatrix}}_{D}\qquad {\rm {Rep}}_{D}(\,{\mbox{id}}({\begin{pmatrix}1\\0\end{pmatrix}}))={\begin{pmatrix}-1/2\\1/2\end{pmatrix}}_{D}}$ the change of basis matrix is this. ${\displaystyle {\rm {Rep}}_{B,D}({\rm {id)={\begin{pmatrix}-1/2&-1/2\\3/2&1/2\end{pmatrix}}}}}$ We can see this matrix at work by finding the two representations of ${\displaystyle {\vec {e}}_{2}}$ ${\displaystyle {\rm {Rep}}_{B}({\begin{pmatrix}0\\1\end{pmatrix}})={\begin{pmatrix}1\\-2\end{pmatrix}}\qquad {\rm {Rep}}_{D}({\begin{pmatrix}0\\1\end{pmatrix}})={\begin{pmatrix}1/2\\1/2\end{pmatrix}}}$ and checking that the conversion goes as expected. ${\displaystyle {\begin{pmatrix}-1/2&-1/2\\3/2&1/2\end{pmatrix}}{\begin{pmatrix}1\\-2\end{pmatrix}}={\begin{pmatrix}1/2\\1/2\end{pmatrix}}}$ We finish this subsection by recognizing that the change of basis matrices are familiar. Lemma 1.4 A matrix changes bases if and only if it is nonsingular. Proof For one direction, if left-multiplication by a matrix changes bases then the matrix represents an invertible function, simply because the function is inverted by changing the bases back. Such a matrix is itself invertible, and so nonsingular. 
To finish, we will show that any nonsingular matrix ${\displaystyle M}$ performs a change of basis operation from any given starting basis ${\displaystyle B}$ to some ending basis. Because the matrix is nonsingular, it will Gauss-Jordan reduce to the identity, so there are elementatry reduction matrices such that ${\displaystyle R_{r}\cdots R_{1}\cdot M=I}$. Elementary matrices are invertible and their inverses are also elementary, so multiplying from the left first by ${\displaystyle {R_{r}}^{-1}}$, then by ${\displaystyle {R_{r-1}}^{-1}}$, etc., gives ${\displaystyle M}$ as a product of elementary matrices ${\displaystyle M={R_{1}}^{-1}\cdots {R_{r}}^{-1}}$. Thus, we will be done if we show that elementary matrices change a given basis to another basis, for then ${\displaystyle {R_{r}}^{-1}}$ changes ${\displaystyle B}$ to some other basis ${\displaystyle B_{r}}$, and ${\displaystyle {R_{r-1}}^{-1}}$ changes ${\displaystyle B_{r}}$ to some ${\displaystyle B_{r-1}}$, ..., and the net effect is that ${\displaystyle M}$ changes ${\displaystyle B}$ to ${\displaystyle B_{1}}$. We will prove this about elementary matrices by covering the three types as separate cases. Applying a row-multiplication matrix ${\displaystyle M_{i}(k){\begin{pmatrix}c_{1}\\\vdots \\c_{i}\\\vdots \\c_{n}\end{pmatrix}}={\begin{pmatrix}c_{1}\\\vdots \\kc_{i}\\\vdots \\c_{n}\end{pmatrix}}}$ changes a representation with respect to ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{i},\dots ,{\vec {\beta }}_{n}\rangle }$ to one with respect to ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,(1/k){\vec {\beta }}_{i},\dots ,{\vec {\beta }}_{n}\rangle }$ in this way. ${\displaystyle {\vec {v}}=c_{1}\cdot {\vec {\beta }}_{1}+\dots +c_{i}\cdot {\vec {\beta }}_{i}+\dots +c_{n}\cdot {\vec {\beta }}_{n}}$ ${\displaystyle \mapsto \;c_{1}\cdot {\vec {\beta }}_{1}+\dots +kc_{i}\cdot (1/k){\vec {\beta }}_{i}+\dots +c_{n}\cdot {\vec {\beta }}_{n}={\vec {v}}}$ Similarly, left-multiplication by a row-swap matrix ${\displaystyle P_{i,j}}$ changes a representation with respect to the basis ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{i},\dots ,{\vec {\beta }}_{j},\dots ,{\vec {\beta }}_{n}\rangle }$ into one with respect to the basis ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{j},\dots ,{\vec {\beta }}_{i},\dots ,{\vec {\beta }}_{n}\rangle }$ in this way. 
${\displaystyle {\vec {v}}=c_{1}\cdot {\vec {\beta }}_{1}+\dots +c_{i}\cdot {\vec {\beta }}_{i}+\dots +c_{j}{\vec {\beta }}_{j}+\dots +c_{n}\cdot {\vec {\beta }}_{n}}$ ${\displaystyle \mapsto \;c_{1}\cdot {\vec {\beta }}_{1}+\dots +c_{j}\cdot {\vec {\beta }}_{j}+\dots +c_{i}\cdot {\vec {\beta }}_{i}+\dots +c_{n}\cdot {\vec {\beta }}_{n}={\vec {v}}}$ And, a representation with respect to ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{i},\dots ,{\vec {\beta }}_{j},\dots ,{\vec {\beta }}_{n}\rangle }$ changes via left-multiplication by a row-combination matrix ${\displaystyle C_{i,j}(k)}$ into a representation with respect to ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{i}-k{\vec {\beta }}_{j},\dots ,{\vec {\beta }}_{j},\dots ,{\vec {\beta }}_{n}\rangle }$ ${\displaystyle {\vec {v}}=c_{1}\cdot {\vec {\beta }}_{1}+\dots +c_{i}\cdot {\vec {\beta }}_{i}+c_{j}{\vec {\beta }}_{j}+\dots +c_{n}\cdot {\vec {\beta }}_{n}}$ ${\displaystyle \mapsto \;c_{1}\cdot {\vec {\beta }}_{1}+\dots +c_{i}\cdot ({\vec {\beta }}_{i}-k{\vec {\beta }}_{j})+\dots +(kc_{i}+c_{j})\cdot {\vec {\beta }}_{j}+\dots +c_{n}\cdot {\vec {\beta }}_{n}={\vec {v}}}$ (the definition of reduction matrices specifies that ${\displaystyle i\neq k}$ and ${\displaystyle k\neq 0}$ and so this last one is a basis). Corollary 1.5 A matrix is nonsingular if and only if it represents the identity map with respect to some pair of bases. In the next subsection we will see how to translate among representations of maps, that is, how to change ${\displaystyle {\rm {Rep}}_{B,D}(h)}$ to ${\displaystyle {\rm {Rep}}_{{\hat {B}},{\hat {D}}}(h)}$. The above corollary is a special case of this, where the domain and range are the same space, and where the map is the identity map. ## Exercises This exercise is recommended for all readers. Problem 1 In ${\displaystyle \mathbb {R} ^{2}}$ , where ${\displaystyle D=\langle {\begin{pmatrix}2\\1\end{pmatrix}},{\begin{pmatrix}-2\\4\end{pmatrix}}\rangle }$ find the change of basis matrices from ${\displaystyle D}$  to ${\displaystyle {\mathcal {E}}_{2}}$  and from ${\displaystyle {\mathcal {E}}_{2}}$  to ${\displaystyle D}$ . Multiply the two. This exercise is recommended for all readers. Problem 2 Find the change of basis matrix for ${\displaystyle B,D\subseteq \mathbb {R} ^{2}}$ . 1. ${\displaystyle B={\mathcal {E}}_{2}}$ , ${\displaystyle D=\langle {\vec {e}}_{2},{\vec {e}}_{1}\rangle }$ 2. ${\displaystyle B={\mathcal {E}}_{2}}$ , ${\displaystyle D=\langle {\begin{pmatrix}1\\2\end{pmatrix}},{\begin{pmatrix}1\\4\end{pmatrix}}\rangle }$ 3. ${\displaystyle B=\langle {\begin{pmatrix}1\\2\end{pmatrix}},{\begin{pmatrix}1\\4\end{pmatrix}}\rangle }$ , ${\displaystyle D={\mathcal {E}}_{2}}$ 4. ${\displaystyle B=\langle {\begin{pmatrix}-1\\1\end{pmatrix}},{\begin{pmatrix}2\\2\end{pmatrix}}\rangle }$ , ${\displaystyle D=\langle {\begin{pmatrix}0\\4\end{pmatrix}},{\begin{pmatrix}1\\3\end{pmatrix}}\rangle }$ Problem 3 For the bases in Problem 2, find the change of basis matrix in the other direction, from ${\displaystyle D}$  to ${\displaystyle B}$ . This exercise is recommended for all readers. Problem 4 Find the change of basis matrix for each ${\displaystyle B,D\subseteq {\mathcal {P}}_{2}}$ . 1. ${\displaystyle B=\langle 1,x,x^{2}\rangle ,D=\langle x^{2},1,x\rangle }$ 2. ${\displaystyle B=\langle 1,x,x^{2}\rangle ,D=\langle 1,1+x,1+x+x^{2}\rangle }$ 3. 
${\displaystyle B=\langle 2,2x,x^{2}\rangle ,D=\langle 1+x^{2},1-x^{2},x+x^{2}\rangle }$ This exercise is recommended for all readers. Problem 5 Decide if each changes bases on ${\displaystyle \mathbb {R} ^{2}}$ . To what basis is ${\displaystyle {\mathcal {E}}_{2}}$  changed? 1. ${\displaystyle {\begin{pmatrix}5&0\\0&4\end{pmatrix}}}$ 2. ${\displaystyle {\begin{pmatrix}2&1\\3&1\end{pmatrix}}}$ 3. ${\displaystyle {\begin{pmatrix}-1&4\\2&-8\end{pmatrix}}}$ 4. ${\displaystyle {\begin{pmatrix}1&-1\\1&1\end{pmatrix}}}$ Problem 6 Find bases such that this matrix represents the identity map with respect to those bases. ${\displaystyle {\begin{pmatrix}3&1&4\\2&-1&1\\0&0&4\end{pmatrix}}}$ Problem 7 Conside the vector space of real-valued functions with basis ${\displaystyle \langle \sin(x),\cos(x)\rangle }$ . Show that ${\displaystyle \langle 2\sin(x)+\cos(x),3\cos(x)\rangle }$  is also a basis for this space. Find the change of basis matrix in each direction. Problem 8 Where does this matrix ${\displaystyle {\begin{pmatrix}\cos(2\theta )&\sin(2\theta )\\\sin(2\theta )&-\cos(2\theta )\end{pmatrix}}}$ send the standard basis for ${\displaystyle \mathbb {R} ^{2}}$ ? Any other bases? Hint. Consider the inverse. This exercise is recommended for all readers. Problem 9 What is the change of basis matrix with respect to ${\displaystyle B,B}$ ? Problem 10 Prove that a matrix changes bases if and only if it is invertible. Problem 11 Finish the proof of Lemma 1.4. This exercise is recommended for all readers. Problem 12 Let ${\displaystyle H}$  be a ${\displaystyle n\!\times \!n}$  nonsingular matrix. What basis of ${\displaystyle \mathbb {R} ^{n}}$  does ${\displaystyle H}$  change to the standard basis? This exercise is recommended for all readers. Problem 13 1. In ${\displaystyle {\mathcal {P}}_{3}}$  with basis ${\displaystyle B=\langle 1+x,1-x,x^{2}+x^{3},x^{2}-x^{3}\rangle }$  we have this represenatation. ${\displaystyle {\rm {Rep}}_{B}(1-x+3x^{2}-x^{3})={\begin{pmatrix}0\\1\\1\\2\end{pmatrix}}_{B}}$ Find a basis ${\displaystyle D}$  giving this different representation for the same polynomial. ${\displaystyle {\rm {Rep}}_{D}(1-x+3x^{2}-x^{3})={\begin{pmatrix}1\\0\\2\\0\end{pmatrix}}_{D}}$ 2. State and prove that any nonzero vector representation can be changed to any other. Hint. The proof of Lemma 1.4 is constructive— it not only says the bases change, it shows how they change. Problem 14 Let ${\displaystyle V,W}$  be vector spaces, and let ${\displaystyle B,{\hat {B}}}$  be bases for ${\displaystyle V}$  and ${\displaystyle D,{\hat {D}}}$  be bases for ${\displaystyle W}$ . Where ${\displaystyle h:V\to W}$  is linear, find a formula relating ${\displaystyle {\rm {Rep}}_{B,D}(h)}$  to ${\displaystyle {\rm {Rep}}_{{\hat {B}},{\hat {D}}}(h)}$ . This exercise is recommended for all readers. Problem 15 Show that the columns of an ${\displaystyle n\!\times \!n}$  change of basis matrix form a basis for ${\displaystyle \mathbb {R} ^{n}}$ . Do all bases appear in that way: can the vectors from any ${\displaystyle \mathbb {R} ^{n}}$  basis make the columns of a change of basis matrix? This exercise is recommended for all readers. Problem 16 Find a matrix having this effect. ${\displaystyle {\begin{pmatrix}1\\3\end{pmatrix}}\;\mapsto \;{\begin{pmatrix}4\\-1\end{pmatrix}}}$ That is, find a ${\displaystyle M}$  that left-multiplies the starting vector to yield the ending vector. Is there a matrix having these two effects? 1. 
${\displaystyle {\begin{pmatrix}1\\3\end{pmatrix}}\mapsto {\begin{pmatrix}1\\1\end{pmatrix}}\quad {\begin{pmatrix}2\\-1\end{pmatrix}}\mapsto {\begin{pmatrix}-1\\-1\end{pmatrix}}}$ 2. ${\displaystyle {\begin{pmatrix}1\\3\end{pmatrix}}\mapsto {\begin{pmatrix}1\\1\end{pmatrix}}\quad {\begin{pmatrix}2\\6\end{pmatrix}}\mapsto {\begin{pmatrix}-1\\-1\end{pmatrix}}}$ Give a necessary and sufficient condition for there to be a matrix such that ${\displaystyle {\vec {v}}_{1}\mapsto {\vec {w}}_{1}}$  and ${\displaystyle {\vec {v}}_{2}\mapsto {\vec {w}}_{2}}$ . Linear Algebra ← Change of Basis Changing Representations of Vectors Changing Map Representations →
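A quick numerical check of Example 1.3 (my addition, not part of the Wikibooks text; it assumes NumPy is available). Storing each basis's vectors as matrix columns, the change of basis matrix Rep_{B,D}(id) equals D⁻¹B, because applying Rep_D to a vector written in standard coordinates is just multiplication by D⁻¹.

```python
# Verifying Example 1.3: change of basis matrix and the conversion of Rep_B(e2).
import numpy as np

B = np.array([[2.0, 1.0],
              [1.0, 0.0]])        # columns are the B basis vectors (2,1) and (1,0)
D = np.array([[-1.0, 1.0],
              [ 1.0, 1.0]])       # columns are the D basis vectors (-1,1) and (1,1)

M = np.linalg.solve(D, B)         # D^{-1} B, i.e. Rep_{B,D}(id)
print(M)                          # [[-0.5 -0.5]
                                  #  [ 1.5  0.5]]

rep_B_e2 = np.array([1.0, -2.0])  # Rep_B(e2) from the example
print(M @ rep_B_e2)               # [0.5 0.5] = Rep_D(e2), as expected
```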
https://www.flyingcoloursmaths.co.uk/an-insight-into-the-mathematical-mind-box-muller-transforms/
While writing an obituary for George Box, I stumbled on something I thought was ingenious: a method for generating independent pairs of numbers drawn from the normal distribution. I’ll concede: that’s not necessarily something that makes the average reader-in-the-street stop in their tracks and say “Wow!” In honesty, it would probably make the average reader-in-the-street rapidly become a reader-on-the-other-side-of-the-street. However, I thought an article on it might provide some insight into two mathematical minds: that of George Box, one of the greatest ((If not the greatest)) statisticians of the 20th century, and that of me, possibly the greatest mathematical hack of the 21st. ### How the Box-Muller transform works If you want to apply the Box-Muller transform, you need two numbers drawn from a uniform distribution - so they’re equally likely to take on any value between 0 and 1. Let’s call these numbers $U$ and $V$. Box and Muller claim that if you work out $$X = \\sqrt{-2 \\ln (U)} \\cos (2\\pi V)$$ and $$Y = \\sqrt{-2 \\ln (U)} \\sin (2\\pi V)$$ then $X$ and $Y$ are independent (information about one tells you nothing about the other) and normally distributed with a mean of 0 and a standard deviation of 1. I’m not going to prove that, because I don’t know how, but I can explain what’s happening. There’s a hint in my choices of letter: you might recognise that you could simplify these down to $X = R \cos(\theta)$ and $Y = R\sin(\theta)$, which are just the sides of a triangle. The $R = \sqrt{-2\ln(U)}$ is the distance from $(0,0)$ - because $U$ is between 0 and 1, $\ln(U)$ is anywhere from $-\infty$ to 0 ((It’s exponentially distributed, since you ask.)) Multiplying by -2 turns it into a nice positive number (so you can take its square root really) and tends to reduce the distance from the origin. For normally-distributed variables, you want the distances to clump up in the middle; that’s what the 2 is for. The $\theta = 2\pi V$ is much simpler: it just says ‘move in a random direction’. ### What Colin did next My immediate thought was, ‘I wonder if I can use that to work out the probability tables for $z$-scores you get in formula books!’ What do you mean, that wasn’t your immediate thought? ((Weirdo!)) Long story short: the answer is no; I just wanted to show you my thought process and that not everything in maths works out as neatly as you’d like. My insight was that the probability of generating an $X$ value smaller than some constant $k$ would be the same as the probability of generating $U$ and $V$ values that gave smaller $X$s. So far so obvious! In that case, it’s just a case of rearranging the formulas to get expressions for (say) $V$ in terms of $U$ and integrating to find the appropriate area. So I tried that: $\\sqrt{-2 \\ln (U)} \\cos(2\\pi V) = k \\\\ \\cos(2\\pi V) = \\sqrt{ \\frac{k^2}{-2\\ln(U)}} \\\\ V = \\frac{1}{2\\pi}\\cos^{-1}\\left( \\sqrt{ \\frac{k^2}{-2\\ln(U)}} \\right)$ Yikes. I don’t fancy trying to integrate that - the arccos is bad enough, but the $\ln(U)$ on the bottom? Forget about it. Let’s try the other way: $\\sqrt{-2 \\ln (U)} \\cos(2\\pi V) = k \\\\ -2\\ln(U) = k^2 \\sec^2(2\\pi V) \\\\ U = e^{-\\frac{k^2}{2}\\sec^2(2\\pi V)}$ Curses! I don’t think that’s going to work, either. $e^{\sec^2 x}$ isn’t an integral I know how to do - so I’m stymied. 
Back to the drawing board, I’m afraid - this time, I didn’t get the cookie of a new maths discovery; the difference between a poor mathematician and a decent mathematician is that a poor mathematician says “I got it wrong, I’m rubbish;” the decent mathematician says either “ah well. Next puzzle!” or “ah well! Try again.” The great mathematicians, of course, see right to the end of the puzzle before they start.
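For anyone who'd like to see the transform itself at work, here is a minimal Python sketch (my addition, not from the original post) that generates pairs with the two formulas above and sanity-checks the sample mean and standard deviation:

```python
# A minimal Box-Muller demo (illustrative sketch).
import random
from math import sqrt, log, cos, sin, pi

def box_muller():
    """Return one pair of independent standard normal deviates."""
    u = random.random() or 1e-12   # avoid log(0) on the (unlikely) draw u == 0
    v = random.random()
    r = sqrt(-2 * log(u))
    return r * cos(2 * pi * v), r * sin(2 * pi * v)

samples = [x for pair in (box_muller() for _ in range(100_000)) for x in pair]
mean = sum(samples) / len(samples)
sd = sqrt(sum((x - mean) ** 2 for x in samples) / len(samples))
print(f"mean ~ {mean:.3f}, sd ~ {sd:.3f}")   # both should land close to 0 and 1
```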
https://math.meta.stackexchange.com/questions/12540/answers-with-0-votes/12542
# Answers with $0$ votes. Is there any good to keep the your answers with $0$ votes or it's better to delete them? I've answered some questions lately and some are left without a comment or anything. What's your experience with this issue? • This happens sometimes. One answer of mine got cited in an arXiv preprint — but hasn't received any upvotes anyway. – Grigory M Jan 21 '14 at 20:52 • Wow! could you provide an arXive link of such incident? – Spock Jan 21 '14 at 21:00 • @Spock: for a slightly different case, see reference 4 in shameless advertisement. Of course I did upvote the answer, so that's slightly different from Grigory's scenario. – Willie Wong Jan 22 '14 at 8:09 • @WillieWong Apparently, the list of references is not available in the free version. – Tobias Kildetoft Jan 22 '14 at 10:00 • @GrigoryM: That (not receiving upvotes) was because it was not really an answer (I still ends with "I'll try to fix it"), although it was a nice and ultimately fruitful idea. But of course I was glad to have read the attempted answer; I might upvote it just for that. By the way the preprint has now been published. – Marc van Leeuwen Jan 30 '14 at 10:39 • Dear Marc, I haven't meant you had done anything wrong — and glad that my [unsuccessful] attempt somehow helped you. I just thought that the story illustrates the point «0-upvoted answers are [somewhat] useful sometimes» well enough (and sounded funny enough) to share it. – Grigory M Jan 30 '14 at 11:37 • @GrigoryM: I'm glad you shared it, and I didn't (and won't) feel guilty about this. Maybe my comment was just a pretext show I recognised the case, and to post the link. – Marc van Leeuwen Feb 1 '14 at 15:21 An answer with $0$ votes does not hurt your reputation. If it is a good answer, it helps the site and someone may discover that it is a good answer and upvote it in the future. If it is a good answer, leave it. • There's also the allure of the badges Tenacious and Unsung Hero. – hardmath Jan 27 '14 at 21:26 • If it is an useless answer, it'll get downvoted. Or at least ignored. – vonbrand Feb 1 '14 at 16:41 Unless I notice (or am notified of) serious problems with an answer of mine, I keep them around, regardless of the score. I see no reason to delete such answers, as there is no telling whether some future visitor may come across an answer and find them valuable. Heck, it's possible that users have already found these answers valuable, but haven't gained enough reputation to upvote them. • Good point about the users who might not be able to upvote yet. – robjohn Jan 21 '14 at 19:57 • @robjohn: Or don't bother to upvote them. I have had a number of answers that didn't attract any upvotes at the time, but later did with a comment that indicated it was helpful, not just upvoting to keep the question from appearing again – Ross Millikan Jan 25 '14 at 4:24 Yes, in my opinion, it's not even close -- you should leave up any answer without wrong mathematics. I get the sense that upvotes don't work very well for answers (they work very well for questions). There is almost no correlation between the answers I write that I like very much and the answers I like that get upvotes. • Agreed. I've had several answers ride the HNQ rocket, and only one did I feel actually deserved that level of attention and votes. Happily, that is my most highly voted answer of all time on any site. 
:) – Wildcard Jan 6 '17 at 13:46
• "What happens over time is that either these users gain enough reputation to upvote your answer" - but do they actually go back and upvote later? – starsplusplus Jan 24 '14 at 10:37
https://ec.gateoverflow.in/576/gate-ece-2014-set-4-question-28
The unilateral Laplace transform of $f(t)$ is $\frac{1}{s^2+s+1}$. Which one of the following is the unilateral Laplace transform of $g(t) = t \cdot f(t)$?

1. $\frac{-s}{(s^2+s+1)^2}$
2. $\frac{-(2s+1)}{(s^2+s+1)^2}$
3. $\frac{s}{(s^2+s+1)^2}$
4. $\frac{2s+1}{(s^2+s+1)^2}$
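A brief worked note (my addition, using the standard $s$-domain differentiation property): if $\mathcal{L}\{f(t)\} = F(s)$, then $\mathcal{L}\{t\,f(t)\} = -\dfrac{dF(s)}{ds}$. With $F(s) = \dfrac{1}{s^2+s+1}$,

$$-\frac{d}{ds}\left(\frac{1}{s^2+s+1}\right) = \frac{2s+1}{(s^2+s+1)^2},$$

which is option 4.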
http://mathoverflow.net/users/424/mike-usher
# Mike Usher

reputation 614 · website math.uga.edu/~usher · member for 5 years · seen Sep 15 at 11:29 · profile views 851

Top answers:
- 12 Operations via Morse Theory
- 12 The core question of topology
- 12 The “miracle” of Heegard Floer.
- 12 Is the Fukaya category “defined”?
- 11 Why $\partial$ and $\bar{\partial}$ defined in that way (the Wirtinger derivatives)?

# 2,067 Reputation
+95 symplectic structure of tangent bundle of $\mathbb{S}^{n-1}$
+20 The “miracle” of Heegard Floer.
+10 Why $\partial$ and $\bar{\partial}$ defined in that way (the Wirtinger derivatives)?
+10 Symplectic blow-up

# 2 Questions
- 7 Embeddings without nonvanishing normal vector fields
- 7 Gromov-Witten invariants counting curves passing through two points

# 27 Tags
87 sg.symplectic-geometry × 16 · 32 symplectic-topology × 5 · 28 gn.general-topology × 3 · 27 dg.differential-geometry × 5 · 18 differential-topology × 3 · 17 gromov-witten-theory × 3 · 14 gt.geometric-topology × 3 · 14 ag.algebraic-geometry × 3 · 14 morse-theory × 2 · 12 4-manifolds × 3

# 1 Account
MathOverflow 2,067 rep 614
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5502905249595642, "perplexity": 8942.29396325031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647865.10/warc/CC-MAIN-20141024030047-00004-ip-10-16-133-185.ec2.internal.warc.gz"}
https://aviation.stackexchange.com/questions/73867/how-do-jetliners-age-major-or-notable-checks-and-part-replacements
# How do jetliners age (major or notable checks and part replacements)?

As commercial airliners can be (and are) operated for many years (or even decades), I've started to ponder: how do they actually age? Or, in other words, what are the major or notable maintenance-related events (checks/replacements) during an airliner's service life, and how often do they happen? While there are many parts that come and go every $x$ hours of flight or $n$ takeoff/landing cycles that are hard to list, my first guess was the engines or on-board computers, but then I thought about the fuselage. I assume that it is designed to withstand the whole service life, but at the same time material fatigue seems to be unavoidable, and this holds for many other parts as well. If possible, I'd especially like someone to elaborate on the Boeing 737NG and/or Boeing 747 (differences between the two, or lack thereof, might be especially intriguing), but any small- to long-range airliner will work. If this question needs fixing/clarification, I'll be really happy to do so!

• You should look up what an "A/B/C/D check" is in regards to aircraft. – Ron Beyer Jan 28 at 20:34
• On the system side, read about MSG-3 sassofia.com/blog/…. Under MSG-3 most system components can be run indefinitely because they are periodically performance tested for degradation. For example, a hydraulic actuator that passes functional testing for internal/external leakage every C check (5-6 thousand hours) can run for the entire life of the airplane theoretically. – John K Jan 28 at 22:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3522971272468567, "perplexity": 2180.764440231227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189141.23/warc/CC-MAIN-20201127044624-20201127074624-00441.warc.gz"}
https://zbmath.org/?q=an:0719.19003
# zbMATH — the first resource for mathematics

Equivariant bicycles on singular spaces. (English. Abridged French version) Zbl 0719.19003

The authors announce in this note some of their results concerning the geometric description of the equivariant Kasparov groups. These are generalizations of earlier results obtained by the first author and R. Douglas [Proc. Symp. Pure Math. 38, 117-173 (1982; Zbl 0532.55004)] and then extended by A. Connes and G. Skandalis [C. R. Acad. Sci., Paris, Sér. I 292, 871-876 (1981; Zbl 0529.58030); Publ. Res. Inst. Math. Sci. 20, 1139-1183 (1984; Zbl 0575.58030)]. The Connes-Skandalis construction of the groups $KK(X,Y)$ as sets of equivalence classes of bivariant cycles (bicycles) is extended into the wider context when both X and Y are Thom-Mather stratified spaces; the new construction relies on Goresky’s $\pi$-fiber condition and on the concept of a normally non-singular map between stratified spaces. Transversality is then used to give a direct construction of the intersection product $KK(X,Y)\otimes_{\mathbb{Z}}KK(Y,W)\to KK(X,W).$ The map $\mu: KK(X,Y)\to KK(C_0(X),C_0(Y))$ constructed by Connes and Skandalis is shown to be an isomorphism which transforms the intersection product into that of Kasparov. The authors’ method applies to the equivariant case as well, yielding the groups $KK_G(X,Y)$ (for G a compact Lie group). The definition of the corresponding intersection product $KK_G(X,Y)\otimes_{\mathbb{Z}}KK_G(Y,W)\to KK_G(X,W)$ uses mainly the geometric Bott periodicity. As a byproduct, geometric realizations of the cobordism groups $\Omega(X)$, of the bivariant cobordism groups $\Omega\Omega(X,Y)$, as well as of their equivariant version $\Omega\Omega_G(X,Y)$, are derived.

##### MSC:
19K35 Kasparov theory ($KK$-theory)
19L47 Equivariant $K$-theory
55N22 Bordism and cobordism theories and formal group laws in algebraic topology
57N80 Stratifications in topological manifolds
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7543466687202454, "perplexity": 858.4942468895573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057039.7/warc/CC-MAIN-20210920131052-20210920161052-00135.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/22948
## Files in this item

9503260.pdf (5 MB, PDF; no description provided)

## Description

Title: A measurement of W boson-photon and Z boson-photon cross sections in the muon channel in 1.8 TeV proton-antiproton collisions
Author(s): Luchini, Christopher B.
Doctoral Committee Chair(s): Errede, Steven M.
Department / Program: Physics
Discipline: Physics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Physics, Elementary Particles and High Energy

Abstract: We have measured the production cross sections × decay branching ratios for W + $\gamma$ and Z + $\gamma$ in the muon channel in $\sqrt{s}$ = 1.8 TeV p-$\bar{p}$ collisions, from 3.54 $\pm$ 0.24 $pb^{-1}$ of muon W and Z data from the CDF 1988-89 Tevatron collider run. For photons in the central region ($|\eta|$ 5.0 GeV and lepton-photon angular separation $\Delta R_{\ell\gamma} > 0.7$), 5 $W_{\gamma}$ candidates and 2 $Z_{\gamma}$ candidates were observed. From these events, the $\sigma \cdot BR(W + \gamma)$ and $\sigma \cdot BR(Z + \gamma)$ cross sections for the muon samples are measured, and compared with Standard Model predictions. We also determined the cross section ratios, $W_{\gamma}$/W, $Z_{\gamma}$/Z and $W_{\gamma}/Z_{\gamma}$, which, along with the previous CDF measurement(s) of the W/Z cross section ratio, provide new insight on the Standard Model, and are sensitive to anomalous couplings of the W and Z bosons. Using the $W_{\gamma}$ and $Z_{\gamma}$ absolute cross section measurements, the absence of an excess of hard photons accompanying W and Z boson production enables us to obtain direct limits on anomalous $WW_{\gamma}$, $ZZ_{\gamma}$ and $Z_{\gamma\gamma}$ couplings. For saturation of unitarity, these experimental limits impose constraints on possible internal (composite) structure for the W and Z bosons with compositeness scale sensitivity up to $\Lambda_{W} \sim$ 1 TeV and $\Lambda_{Z} \sim$ 250-500 GeV, respectively. These compositeness scale limits probe the possible internal structure of the W to $\lesssim 2 \times 10^{-4}$ fm and the internal structure of the Z to $\lesssim 4$-$7 \times 10^{-4}$ fm. Limits on the anomalous $W_{\gamma}$ and $Z_{\gamma}$ couplings also impose constraints on the higher order static and transition EM moments of the W and Z boson, respectively. The muon channel measurements were combined with those made from the electron channel to arrive at $\mu$ + e combined $\sigma \cdot BR(W + \gamma)$ and $\sigma \cdot BR(Z + \gamma)$ absolute cross sections, and limits on the anomalous couplings for combined $\mu$ + e $W_{\gamma}$ and $Z_{\gamma}$.

Issue Date: 1994
Type: Text
Language: English
URI: http://hdl.handle.net/2142/22948
Rights Information: Copyright 1994 Luchini, Christopher B.
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9503260
OCLC Identifier: (UMI)AAI9503260
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661028385162354, "perplexity": 2577.7955766001364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00383.warc.gz"}
https://www.skepticalcommunity.com/viewtopic.php?f=6&t=26533&start=260
## Supreme Court strikes a blow for the First Amendment. Lies, damned lies, and statistics. Skeeve Posts: 10640 Joined: Wed Jun 09, 2004 7:35 am ### Left and right united in opposition to ... SCOTUS decision UPDATE: Left and right united in opposition to controversial SCOTUS decision Much has been made of late about the hyper-partisan political environment in America. On Tuesday, Sen. Evan Bayh explained his surprising recent decision to leave the Senate by lamenting a "dysfunctional" political system riddled with "brain-dead partisanship." It seems you'd be hard-pressed to get Republicans and Democrats inside and outside of Washington to agree on anything these days, that if one party publicly stated its intention to add a "puppies are adorable" declaration to its platform, that the other party would immediately launch a series of anti-puppy advertisements. But it appears that one issue does unite Americans across the political spectrum. A new Washington Post-ABC News poll finds that the vast majority of Americans are vehemently opposed to a recent Supreme Court ruling that opens the door for corporations, labor unions, and other organizations to spend money directly from their general funds to influence campaigns. Interesting....oh well. I doubt many folks here 'on the Right' are bleating about it too much. However, for all the other folks 'on the 'Right' who voted for dubya, just remember, he appointed two of the Justices that voted for it. Then Skank Of America could start in... corplinx Posts: 20330 Joined: Tue Jan 29, 2008 12:49 am Title: Moderator ### Re: Left and right united in opposition to ... SCOTUS decisi It could just as easily read "left and right united in support". See the fallacy there and bias there? There isn't a single group of people who support or oppose the decision in a uniform groupthink. DrMatt BANNED Posts: 29811 Joined: Fri Jul 16, 2004 4:00 pm Location: Location: Location! Where do foreigners and prisoners fall in this story? Grayman wrote:If masturbation led to homosexuality you'd think by now I'd at least have better fashion sense. corplinx Posts: 20330 Joined: Tue Jan 29, 2008 12:49 am Title: Moderator DrMatt wrote:Where do foreigners and prisoners fall in this story? Sarlacc Pit WildCat Posts: 14236 Joined: Tue Jun 15, 2004 2:53 am Location: The 33rd Ward, Chicago ### Re: Left and right united in opposition to ... SCOTUS decisi Skeeve wrote:UPDATE: No, it wasn't really an update. Do you have questions about God? you sniveling little right-wing nutter - jj Skeeve Posts: 10640 Joined: Wed Jun 09, 2004 7:35 am ### Re: Left and right united in opposition to ... SCOTUS decisi corplinx wrote: It could just as easily read "left and right united in support". See the fallacy there and bias there? ... As noted by the Post's Dan Eggen, the poll's findings show "remarkably strong agreement" across the board, with roughly 80% of Americans saying that they're against the Court's 5-4 decision. Even more remarkable may be that opposition by Republicans, Democrats, and Independents were all near the same 80% opposition range. Specifically, 85% of Democrats, 81% of Independents, and 76% of Republicans opposed it. In short, "everyone hates" the ruling. It looks pretty one-sided corplinx. corplinx wrote:There isn't a single group of people who support or oppose the decision in a uniform groupthink. I'm not sure what you mean, but I am willing to listen. WildCat wrote:No, it wasn't really an update. I have, I guess I missed this one, and your right it is a bit old, my bad. 
Then Skank Of America could start in... xouper Posts: 9114 Joined: Fri Jun 11, 2004 4:52 am Location: HockeyTown USA ### Re: Left and right united in opposition to ... SCOTUS decisi ... As noted by the Post's Dan Eggen, the poll's findings show "remarkably strong agreement" across the board, with roughly 80% of Americans saying that they're against the Court's 5-4 decision. Even more remarkable may be that opposition by Republicans, Democrats, and Independents were all near the same 80% opposition range. Specifically, 85% of Democrats, 81% of Independents, and 76% of Republicans opposed it. In short, "everyone hates" the ruling. It's discouraging to see how many politicians are in favor of violating the First Amendment. And that's exactly why it was put there. DrMatt BANNED Posts: 29811 Joined: Fri Jul 16, 2004 4:00 pm Location: Location: Location! Unanimous on "under god", remember? Grayman wrote:If masturbation led to homosexuality you'd think by now I'd at least have better fashion sense. Skeeve Posts: 10640 Joined: Wed Jun 09, 2004 7:35 am ### Re: Left and right united in opposition to ... SCOTUS decisi xouper wrote: ... As noted by the Post's Dan Eggen, the poll's findings show "remarkably strong agreement" across the board, with roughly 80% of Americans saying that they're against the Court's 5-4 decision. Even more remarkable may be that opposition by Republicans, Democrats, and Independents were all near the same 80% opposition range. Specifically, 85% of Democrats, 81% of Independents, and 76% of Republicans opposed it. In short, "everyone hates" the ruling. It's discouraging to see how many politicians are in favor of violating the First Amendment. And that's exactly why it was put there. Unless you are joking, I think they mean 'just folks' in this poll, not politicians. Then Skank Of America could start in... xouper Posts: 9114 Joined: Fri Jun 11, 2004 4:52 am Location: HockeyTown USA ### Re: Left and right united in opposition to ... SCOTUS decisi Skeeve wrote: xouper wrote: ... As noted by the Post's Dan Eggen, the poll's findings show "remarkably strong agreement" across the board, with roughly 80% of Americans saying that they're against the Court's 5-4 decision. Even more remarkable may be that opposition by Republicans, Democrats, and Independents were all near the same 80% opposition range. Specifically, 85% of Democrats, 81% of Independents, and 76% of Republicans opposed it. In short, "everyone hates" the ruling. It's discouraging to see how many politicians are in favor of violating the First Amendment. And that's exactly why it was put there. Unless you are joking, I think they mean 'just folks' in this poll, not politicians. I should have quoted this part instead: The poll's findings could enhance the possibility of getting a broad range of support behind a movement in Congress to pass legislation that would offset the Court's decision. Skeeve Posts: 10640 Joined: Wed Jun 09, 2004 7:35 am ### Re: Left and right united in opposition to ... SCOTUS decisi xouper wrote: Skeeve wrote: xouper wrote: ... As noted by the Post's Dan Eggen, the poll's findings show "remarkably strong agreement" across the board, with roughly 80% of Americans saying that they're against the Court's 5-4 decision. Even more remarkable may be that opposition by Republicans, Democrats, and Independents were all near the same 80% opposition range. Specifically, 85% of Democrats, 81% of Independents, and 76% of Republicans opposed it. In short, "everyone hates" the ruling. 
It's discouraging to see how many politicians are in favor of violating the First Amendment. And that's exactly why it was put there. Unless you are joking, I think they mean 'just folks' in this poll, not politicians. I should have quoted this part instead: The poll's findings could enhance the possibility of getting a broad range of support behind a movement in Congress to pass legislation that would offset the Court's decision. Ah, I see, but since... ... The decision was almost universally hailed by Republicans in Washington, who saw it as a victory for the free speech provided for under the Constitution, while President Obama and prominent Democrats in Washington almost universally derided it as a dark day for American democracy. If Democrats bring it up, the Republicans will bliock it and it and it will go no where anyway....or maybe not. We'll see. Then Skank Of America could start in... DrMatt BANNED Posts: 29811 Joined: Fri Jul 16, 2004 4:00 pm Location: Location: Location! The political will to not have an equivalent of our First Amendment exists in Europe. Grayman wrote:If masturbation led to homosexuality you'd think by now I'd at least have better fashion sense. Skeeve Posts: 10640 Joined: Wed Jun 09, 2004 7:35 am DrMatt wrote:The political will to not have an equivalent of our First Amendment exists in Europe. Maybe, however I doubt the GOP will start anything unless their base (or Rush and/or Glen) starts a shit storm, so that leaves the Dem's. Since the GOP seems to be naturally opposed to anything the Dems bring up.... Ah, well see... Then Skank Of America could start in... Nyarlathotep Posts: 47933 Joined: Fri Jun 04, 2004 2:50 pm ### Re: Left and right united in opposition to ... SCOTUS decisi corplinx wrote: There isn't a single group of people who support or oppose the decision in a uniform groupthink. Yes there is. the group of people who support the idea all support it and the group of people who oppose the idea oppose it. Remember: the first rule of tautology club is the first rule of tautology club... Bango Skank Awaits The Crimson King! Skeeve Posts: 10640 Joined: Wed Jun 09, 2004 7:35 am ### One solution? xouper wrote: tedly wrote:... Does it necessarily follow, from the corporation's creation as a person, that it is therefore a citizen, and entitled to contribute to political campaigns? I seem to recall a great deal of fuss in US politics about taking money from Korean sources, because, (I'm guessing) they weren't citizens. 1) With respect to the court decision and the First Amendment, it is irrelevant whether a corporation is a legal "person". 2) The court decision was not about contributions to political campaigns, it was about Congress making a law abridging freedom of speech. My solution: Limit total 'political-candidate' contributions by any person (real or fictional) or entity to a maximum of (lets say) $10,000 per year, TOTAL. And then leave free speech alone. YMMV Then Skank Of America could start in... Abdul Alhazred Posts: 72277 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago I have a better idea. No limits on contributions, but every contribution must be out in the open. Do you really think "limiting the amount" limits the amount? Any man writes a mission statement spends a night in the box. -- our mission statement plappendale WildCat Posts: 14236 Joined: Tue Jun 15, 2004 2:53 am Location: The 33rd Ward, Chicago ### Re: One solution? 
Skeeve wrote: My solution: Limit total 'political-candidate' contributions by any person (real or fictional) or entity to a maximum of (lets say) $10,000 per year, TOTAL. And then leave free speech alone. YMMV

Why not $10/year? Do you have questions about God? you sniveling little right-wing nutter - jj

WildCat Posts: 14236 Joined: Tue Jun 15, 2004 2:53 am Location: The 33rd Ward, Chicago

Abdul Alhazred wrote: I have a better idea. No limits on contributions, but every contribution must be out in the open. Do you really think "limiting the amount" limits the amount?

Yes, it would. If campaign contributions were limited to $10 per person, the maximum amount collected would be $10 times the population. Toni Preckwinkle got $150,000 from the SEIU for her Cook County board run... you don't think that will influence her when it comes to cutting the fat from the county budget? Do you have questions about God? you sniveling little right-wing nutter - jj

Abdul Alhazred Posts: 72277 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago

WildCat wrote: Toni Preckwinkle got $150,000 from the SEIU for her Cook County board run... you don't think that will influence her when it comes to cutting the fat from the county budget?

Better than getting it under the table and no one the wiser? What change in the law do you think it would take to make Cook County not corrupt? As distinct from enforcing the laws we have now? Any man writes a mission statement spends a night in the box. -- our mission statement plappendale

Abdul Alhazred Posts: 72277 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago

The longer I live here the more I am convinced that rank-and-file voters in Cook County actually like corruption. I also think this is increasingly true of Democrats generally. Any man writes a mission statement spends a night in the box. -- our mission statement plappendale
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20789016783237457, "perplexity": 8745.831453689974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659677.17/warc/CC-MAIN-20190118025529-20190118051529-00191.warc.gz"}
https://www.physicsforums.com/threads/bionomial-probability-distribution.17653/
# Binomial Probability Distribution

1. Apr 2, 2004

### PARAJON

I need help on this problem; my answer for (a) is the following, but I'm not sure. Can you help me with (a) and (b)? Thank you.

In a binomial situation n = 5 and π = .40. Determine the probabilities of the following events using the binomial formula.

a. x = 1, n = 5

P(x) = nCx π (1 − π) n − x

P(1) = 5! 1! (5−1)! (.40) 1 (1−.40) 5−1
= 120 (.40) 1 (.60) 4

b. x = 2, n = 5

2. Apr 2, 2004

### matt grime

The probability of an event lies between 0 and 1. You've got 5 choose 1 as 5!1!, when it's 5, and the 1 and 4 should be powers you're raising 0.4 and 0.6 to.

If the probability of a success on a trial is p (and q = 1 − p), then the probability of r successes in n trials is:

$$\binom{n}{r}p^rq^{n-r}$$

where

$$\binom{n}{r} = \frac{n!}{r!(n-r)!}$$

and is available as a button on your calculator
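For anyone who wants to check the final numbers, here is a small Python sketch (not from the original thread) that evaluates the formula matt grime quotes with n = 5, p = 0.4:

```python
# Evaluate P(r) = C(n, r) * p^r * q^(n-r) for the thread's numbers.
from math import comb

n, p = 5, 0.40
q = 1 - p

for r in (1, 2):  # parts (a) and (b) of the question
    prob = comb(n, r) * p**r * q**(n - r)
    print(f"P({r}) = {prob:.4f}")
# P(1) = 0.2592
# P(2) = 0.3456
```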
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7777823209762573, "perplexity": 1441.021097693482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542648.35/warc/CC-MAIN-20161202170902-00134-ip-10-31-129-80.ec2.internal.warc.gz"}
https://lists.debian.org/debian-backports/2006/06/msg00157.html
# Re: default pin

On Fri, 2006-06-23 at 23:20, martin f krafft wrote:
> also sprach Thomas Walter <[email protected]> [2006.06.23.2302 +0200]:
> > The only instability I see is, you no longer see always the same
> > versions numbers for months/years. 8-)
>
> Which may be horrific. If you happen to have access to my book,
>
> \item[\emph{Software feature stability}]~\\
> Stability\index{stability!feature} may also refer to the feature set
> provided by a software. In this definition, stable software does not
> introduce drastic changes or radical new features from one release to the
> next. Administrators appreciate feature stability because it allows them to
> fix bugs with newer versions without risking unwanted changes to the
> behaviour.
>
> When using Debian and from one stable release to the other then you have

Due to the fact that several major/minor upstream releases may be jumped over while the package lives in experimental/unstable/testing. Using backports one can mitigate such big steps and still profit from improvements in existing features/functionality. Maybe new/additional functionality introduces new bugs, but this risk is reduced by the strict rule of backports to use SW from testing only.

What are "unwanted changes"? "Unwanted changes" by whom? In general upstream goes forward, not backward, and does not release bug fix versions forever. Some time after releasing a new stable release, the previous one is dead except for security fixes, which are provided for longer.

Finally, the decision about what is useful, wanted, and should be available is on the customer and not on the administrator. Sorry to say that, but I'm working in an environment where I'm faced with both sides, or better with all 3 sides: customer, admin, developer/upstream. The customer asks upstream to implement a feature/functionality and upstream offers something to the customer. The admin is in the middle, watching for security and availability to the customer. But the admin cannot patronize the customer or decide for the customer. In brief, if the customer wants version x, then the admin has to provide version x.

> (I am now off to rewrite the laste sentence for clarity)
>
> Security updates fix bugs but ensure feature stability. Backports
> may fix bugs but does not ensure feature stability.

As the name/definition says, "security updates" fix bugs which are a security hole, but not bugs in features/functionality. That's a big difference. Or does feature/functional stability imply staying with the current buggy behavior? At least that is not what I know from customers, and not what I expect as a customer.

And one has to distinguish between external and internal intruders. Example: some SW has a security bug when fed with special input. But this cannot happen from outside. Most internal users are not intruders. One task of an admin is to block intruders from outside.

Cheers,
Thomas
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5044010281562805, "perplexity": 10312.882813330558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607848.9/warc/CC-MAIN-20170524152539-20170524172539-00177.warc.gz"}
http://coldattic.info/post/77/
One of the features of the web forum engine, a clone of which I develop in Ruby on Rails, is that it displays some site usage statistics. There are two kinds of statistics it displays. First, it tracks and shows how many times a particular message was read (i.e. clicked), not counting a user's clicks on his or her own message. Second, it shows the generic site usage activity, unique visitors, and page views in the top-right corner. In this post I'll tell how I solved the problem of calculating the page view and visitor statistics.

Impatient developers may jump directly to the solution section, while others are welcome to read about the path I walked from the most naïve activity tracking to the Wide Cache.

### Naïve solution

The naïve solution was to store all site accesses in a table in the application MySQL database. Each line in this table would represent one access to the site, keeping a user's hostname and the access time. At each access, a corresponding line is inserted into the table. Then, if we are to display the information gathered, we run two simple DB queries to get the pageview information (the time is the current time minus 10 minutes):

SELECT COUNT(distinct host) FROM activities ;
SELECT COUNT(*) FROM activities WHERE (created_at < '2012-04-01 18:33:25');

The writes to and reads from the activities table happened at each request. This was achieved via a before_filter in the root controller, which is usually named ApplicationController, as hinted in this StackOverflow question. It spawned another process that handled all the activity business, and didn't prevent the main process from serving pages timely.

As you might expect, the performance measurements were disastrous: the application could serve only four requests per second, while with all activity stuff turned off the throughput was about 40 rps.

### Storing the table in memcached

The reason why the naive solution was not working as expected was lack of throughput. At first, I thought that since we could defer the database writes, and perform them after the request was served, we wouldn't have any problems. However, this only addressed one of the performance metrics, response time, while what the solution lacked was throughput. I also didn't want to introduce other technologies specifically for activity tracking. What I had at my disposal was memcached. Other components of my application used memcached to cache database reads and whole pages. We could, I thought, cache database writes with it as well.

Rails storage with a memcached backend supports two operations. The first is to write something under a string key, perhaps with a limited lifetime. The second is to read from a specified storage entry if anything has been written to it before, and if its lifetime is not over yet. That's basically all.

We could try to use memcached to store the SQL table itself. Indeed, that table may be viewed as an array of rows, so we could just read the table, append a row, and write the result back. However, for the sake of performance, memcached doesn't support any locking, so we can't just store the table described in the naive approach in a cache bucket. Two threads may read the array, then both append the next row, and both write, one row being discarded. This is a classical race condition. And my aim was to serve 30-40 requests per second, which means that race conditions that appear if this approach is used were inevitable.
Besides, even if we could use locking (for instance, via Linux named locks (flocks)), it could perform even worse than the database, because it wouldn't be able to handle enough requests sequentially. We need a certain degree of concurrency here.

### Wide caching

To mitigate the race conditions that happen during database writes, I used a technique I named "wide caching". I certainly reinvented the wheel here, but for the sake of clarity, I'll use this name in the rest of the post.

The race condition from the previous section could be mitigated by distributing the big "table" into smaller, independent tables stored in different memcached chunks. Writes to each of the chunks would not be protected by a mutex, so a race condition would still be possible. We could try to mitigate it by using round-robin chunk allocation, i.e. a new thread writes to the chunk next to the one allocated to the previous thread.

This solution, however, could be improved. First, round-robin allocation requires a central, synchronized authority to return proper chunks. This brings the same level of technological complexity as a single bucket with a lock. Second, round-robin doesn't guarantee the lack of race conditions either. A thread may stall long enough for the round-robin to make a whole "round", and to consider the chunk currently in process again. To avoid this, the allocator would have to be even more complex. Third, let's realize that we may trade precision for speed. The result will not be absolutely accurate. People look at the site usage statistics out of curiosity, and the figures need not be precise. We'll try to make them fast first.

This all suggests a simple idea: get rid of the allocator! Just use a random bucket in each thread. This will not prevent race conditions either, but it will make them predictable. If the allocation is completely random, we can carry out experiments, and extend their results in time, being sure that they will be reproducible.

### Decreasing the effect of race conditions

The previous paragraph consisted of reasons and measures that addressed the number of potential accesses to the same cache bucket within a period of time. What also matters is how long each cache access is. The shorter it is, the more writes to a hash bucket per second we are going to handle correctly.

#### Memcache request structure

Rails provides a uniform interface to various caches. The interface allows storing serialized Ruby structures in the cache. The picture shows the steps that happen during an update of a table stored in a hash bucket, the scale roughly representing the time each of the steps takes. A typical hash bucket access consists of these steps:

1. reading raw data from the cache;
2. deserializing, i.e. converting from the character format to the Ruby format (the inverse of serializing);
3. appending a row to the deserialized table;
4. serializing back to the character format (the inverse of deserializing);
5. writing raw data to the cache.

We can see that steps 1, 2, 4, and 5 depend on the number of records in the array we store in the hash bucket. The more information we record, the more time they take, and when the data are large enough, the time becomes proportional to the size. And if we just store all the activity data in the cache, the size of the arrays being (de)serialized only grows in time!

How could we decrease the size of the buckets? The wide cache already decreases the expected size by a factor of the width; could we do better? It turns out we can. Activity data are time-based.
The older the data are, the less interesting they become. In our use case, we were only interested in the data for the latest 10 minutes. Of course, a routine cleanup of tables that drops all records with old enough timestamps is not sufficient: the activity data for 10 minutes are large enough (with 50 rps, and 50 being the cache width, the mean size would be 600).

Time reminds us about another idea. Why do we load and then write back the old data in the table? They don't change anyway! Since we're using a generic cache solution, and can't reach into its read/write algorithm, we can't do it on a per-record basis (i.e. not load the table, and just append the record being inserted). What can we do instead?

We may utilize another highly accessible resource, whose usage is, however, totally discouraged when building a reliable distributed algorithm. We have the clock, which is easy and fast to access. The clock provides us with a natural discriminator: each, say, three seconds, abandon the old hash bucket, and start a new one that will be responsible for storing the requests for the next three seconds. Given the current time, we can always find the "layer" of the cache we should randomly select the bucket from.

The discussion above was devoted to how to record the activity data. However, the data should also be displayed in a timely manner. How timely? Let's consider that if the data are updated at least once per 30 seconds, it's good enough. Then, each 30 seconds we should reconcile the activity data written into memcached, and "compress" them to an easily readable format. Since I already had the MySQL implementation, I piggybacked on it, and merely inserted the activity information into the SQL table, so the reading procedures did not have to change at all!

To get the statistics, we'll have to iterate over all hash buckets the writing algorithm could fill, because gathering the ids of the buckets actually filled would require additional synchronization (additional writes), and, since the majority of them will be filled under high load, we'd better just collect them all. Theoretically, we should collect all the buckets that might have been filled during the last 10 minutes, the period we show the activity for. However, if we run the collecting algorithm each 30 seconds, we only need to collect the data for the latest 30 seconds, since all the previous data should have already been collected.

We'll have another race condition here. A worker may get the current time, generate a hash bucket id X, and then stall for several seconds. During that stall, the writing worker collects and commits all the data to the MySQL table, including the piece stored in X. The first worker wakes up, and writes the visit information to X, from which it will never be read, so the request is lost. To mitigate this race condition, we may collect and commit the data only from the layers that are deep enough. This won't help to avoid it completely, but will decrease its probability to a degree at which we can ignore it.

### The final Wide Caching solution

If we assemble all of the above, we'll get the final solution that looks like this. When a page request arrives, it asks for the current site usage statistics as one of the steps during the page printing. The statistics are requested from the activity table in the MySQL database, and are cached for a medium amount of time (30 seconds), because more frequent updates do not improve user experience. After the page has been served, the request notifies the wide cache about the access to the site.
First, it determines the "row" of the cache by rounding the current time in seconds since the Epoch down to a number divisible by three. Then, it determines the "column" by choosing a random number from 1 to 15. These numbers are concatenated; they form a unique identifier of a memcache entry. The website then reads the table from that entry, appends a row, and updates the same entry.

Dumping the collected accesses to the DB is performed like this. After the notification, the request also checks if there is a live memcached entry with a special name, and a lifetime equal to 30 seconds. If there is such an entry, it means that the information in the DB is obsolete, so the algorithm starts to commit the information into the MySQL DB. While the information is being committed, other requests may appear that would also check the lifetime of the memcached entry, and start the commit process. This is why the order in which the memcached entries are read is randomized, so that the number of cases when the same information is committed twice is minimized.

Here's the code. It really looks much shorter than this post that describes it.

### Experiments

I mentioned that the setup contains race conditions that will lead to losing some requests. With a cache width of 15, and a bucket height of 3 seconds, a payload of 35 requests per second made the tracker lose 350 out of 6800 requests. This is approximately 5% of the total number of requests, which is acceptable. Because we randomized request queries, we may conclude that 5% (given these figures for requests per second, cache width, and the equipment) will be an average visit loss factor.

#### Spawning

In previous sections, I claimed that spawning threads using the spawn Rails gem, and writing to the database in the spawned processes/threads, is slow. Indeed, spawn reinitializes the connection in the spawned thread, and this is already a considerable burden on the server if several dozens of threads are spawned each second. Here are the experimental details (see bug #67 where I first posted this info):

Method | Reqs. per sec.
--- | ---
No activity | 40.2
With activity; no spawning | 27.4
With activity and spawning | 13.0
With wide cache | 36.7

This shows that (a) you should not use spawning for short requests, and (b) the wide cache is really fast enough.

### Background periodic job or per-request?

In the algorithm described above, activity data collection is started when a request arrives, and the application finds out that the currently cached activity stats are out of date. We were careful to make this case as painless as possible, and experiments show that the loss of throughput is acceptable if we don't try to spawn a thread for each such request. However, it has other implications that are not related to race conditions.

Surprisingly, this method starts performing badly when the site activity is low rather than high. On a low-activity site, we can't rely on requests coming each second. Rather, they may arrive even less frequently than the activity cache lifetime. So, to support the low-activity case, we have to collect the information for caches that might have been filled in the latest 10 minutes (the maximum period for which visit information is still relevant), not 30 seconds (the lowest possible time since the previous data collection). Otherwise, if users arrive less frequently than every 30 seconds, the engine would always show zero activity, which would make users visit the site even less.
This wouldn't be important, unless I used the HAML template engine, which (and this is a known, sad bug) doesn't flush the page until the request is complete. Even putting the code into an after_filter doesn't help. Experiments demonstrate that activity data reconciliation may take up to 1 second, so some requests will be served with an extra 1-second delay, and when the rate of requests is low, this will constitute a considerable fraction of them. And I surely don't want my users to experience frequent delays during their infrequent visits.

Spawn instead of after_filter? We have already seen that spawning will make our server choke at the other extreme, when the load is high.

Luckily, we have a solution that suits both cases equally well. It is to periodically invoke the activity data collection procedure, without tying it to requests. However, the periodic invocation of a task is not straightforward in Rails. I'll get back to it in another post.

### Future work

The current implementation of the activity tracker is parametrized with cache attributes (cache lifetime and width). However, this Wide Cache could be parametrized further, with the procedures that are executed, namely:

1. cache bucket updating;
2. the commit procedure;
3. reading what we previously committed.

I think that I'm going to need the same cache that doesn't always touch the database for the first kind of activity, the visits of individual posts. This parametrization will help me to keep the code DRY, and to re-use the Wide Cache. It will require refactoring of visits and hits into a single function that calls a lambda function passed in the constructor.

### Other approaches

#### Parse apache/nginx/rails logs

Indeed, the topmost web serving layer already collects the activity data: it carefully logs each visit, with the URL being visited, a user's IP address and user-agent. Instead of spending time on activity in a comparatively slow Rails application, we could use the logs of a "fast" web server. I have seen production systems that display activity data based on parsing nginx logs, and it may be integrated into Rails in such a way that it doesn't look like a kludge. Perhaps free log parsers are already available on github... but this solution just doesn't look cool enough to me.

#### Do not track activity

Does anyone care about the visits at all? Maybe we just shouldn't track them? First, from my observation, everybody tries to stick a visit tracker into a website. Higher activity also means that you're visiting something valuable. A Russian clone of the Facebook social network even used to display a fake registered-user counter on their homepage. Such fraud could only be justified by a huge benefit from displaying it, which means that the statistics are considered valuable by at least some very popular sites. Second, in my case, there was an easier answer. I should reimplement everything the engine I'm trying to clone contains, for politics-related reasons: unless I reimplement everything, and manage to keep the site fast at the same time, my approach to the development will be considered a failure.

#### Use a specialized activity tracking solution

Too late. I have already implemented my own. But if you want some activity data on your website, do not hesitate to use one. It is hard, see above.

### Conclusion

In one of the discussions on the very forum I'm reimplementing, someone wondered why the view count on Youtube is not updated in realtime for popular videos.
Having gotten through a series of trial-and-error experiments with visit tracking, I realize that the developers of Youtube surely had their reasons. My activity information is also already delayed, while the traffic volume is still insignificant compared to even a single Lady Gaga video. It seems that during the development of this tracking I have reinvented a lot of wheels. This is, I guess, what the most primitive database and file system caches look like. Their algorithms are, of course, more sophisticated, and are not built on top of generic cache and serialization solutions, using their own, custom ones instead. But as with any other reinvention of the wheel, it was fun, and I surely got a lot of understanding about higher-load web serving while trying to optimize this component of the forum engine.
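To close, here is a minimal sketch of the two passes described above. It is Python rather than the post's Ruby, and it assumes a hypothetical memcached-style client with get(key) and set(key, value, expire=...) plus a db_insert callback standing in for the INSERT into the activities table; the production code linked above differs in details, but the time-layer/random-column key scheme and the randomized collection order are the load-bearing ideas.

```python
import random, time, pickle

WIDTH, LAYER_SECONDS, COMMIT_PERIOD = 15, 3, 30

def bucket_key(now=None):
    """Pick a random bucket inside the current 3-second time layer."""
    now = int(now if now is not None else time.time())
    row = now - now % LAYER_SECONDS        # the time "layer"
    col = random.randint(1, WIDTH)         # the random "column"
    return f"activity:{row}:{col}"

def record_visit(cache, host, path):
    """Read-append-write one visit row; a rare lost row is acceptable."""
    key = bucket_key()
    raw = cache.get(key)
    rows = pickle.loads(raw) if raw else []
    rows.append((host, path, time.time()))
    cache.set(key, pickle.dumps(rows), expire=600)    # keep at most ~10 minutes

def collect(cache, db_insert):
    """Commit recently filled buckets to the DB, at most once per COMMIT_PERIOD."""
    if cache.get("activity:committed-recently"):      # a simplified freshness gate
        return
    cache.set("activity:committed-recently", b"1", expire=COMMIT_PERIOD)
    now = int(time.time())
    newest = now - now % LAYER_SECONDS - 2 * LAYER_SECONDS   # skip layers that are too shallow
    layers = range(newest - COMMIT_PERIOD, newest + 1, LAYER_SECONDS)
    keys = [f"activity:{row}:{col}" for row in layers for col in range(1, WIDTH + 1)]
    random.shuffle(keys)                              # lowers the chance of double commits
    for key in keys:
        raw = cache.get(key)
        if raw:
            db_insert(pickle.loads(raw))              # e.g. INSERT INTO activities ...
            cache.set(key, pickle.dumps([]), expire=LAYER_SECONDS)
```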
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2845763564109802, "perplexity": 1495.287820161961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232262311.80/warc/CC-MAIN-20190527085702-20190527111702-00393.warc.gz"}
https://pendulumedu.com/qotd/mensuration-sphere-melted-into-cylinder-6-december-2021
Question of The Day: 06-12-2021

A copper spherical ball of diameter 240 m is melted and molded to form a wire of length 4 km. What should be the diameter of the wire?

Answer

Correct Answer : b ) 48 m

Explanation :

According to the question, the spherical ball has been used to form a cylindrical wire. We know that the volume of the material remains the same, i.e.,

Volume of spherical ball = Volume of cylindrical wire.

Let the radius of the wire be R, the radius of the ball be r, and the height of the cylinder be h.

Diameter of spherical ball = 240 m

Radius of the ball, r = $$240 \over 2$$ = 120 m

Length of cylindrical wire, h = 4 km ⇒ h = 4000 m

We know that Volume of a sphere = $${4 \over 3}\pi r^3$$

The sphere is melted and recast into a wire of cylindrical shape.

Volume of a cylinder = $$\pi R^2 h$$, where R and h are the radius and height of the cylinder, respectively.

Volume of spherical ball = Volume of cylindrical wire

$${4 \over 3}\pi r^3 = \pi R^2 h$$

Substituting the values:

$${4 \over 3}\pi \cdot 120 \cdot 120 \cdot 120 = \pi \cdot R^2 \cdot 4000$$

⇒ $$R^2 = 576$$

⇒ R = 24 m

Diameter of wire = 2 × 24 m = 48 m

Hence, (b) is the correct answer.
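A quick numeric check of the same volume bookkeeping (a Python sketch, not part of the original solution):

```python
# Volume of the sphere equals the volume of the cylindrical wire:
# (4/3)*pi*r^3 = pi*R^2*h, so R = sqrt(4*r^3 / (3*h)).
from math import sqrt

r, h = 240 / 2, 4000          # ball radius (m) and wire length (m)
R = sqrt(4 * r**3 / (3 * h))  # wire radius
print(R, 2 * R)               # 24.0 48.0 -> diameter 48 m, option (b)
```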
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9445918202400208, "perplexity": 2653.4745010093984}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304859.70/warc/CC-MAIN-20220125160159-20220125190159-00515.warc.gz"}
https://learnzillion.com/lesson_plans/3450-6-understanding-that-division-always-means-the-same-thing-regardless-of-the-strategy-used-c
# 6. Understanding that division always means the same thing, regardless of the strategy used (C)

Teaches standards:
- 111.7.3.C http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
- 5.C.2 http://www.doe.in.gov/standards/mathematics
- CCSS.Math.Content.5.NBT.B.6 http://corestandards.org/Math/Content/5/NBT/B/6
- CCSS.Math.Practice.MP3 http://corestandards.org/Math/Practice/MP3
- MAFS.5.NBT.2.6 http://www.cpalms.org/Public/search/Standard

Lesson objective: Understand that division always means the same thing.

Students bring prior knowledge of division from 4.NBT.B.6. This prior knowledge is extended to the meaning of division as students work to understand that, mathematically, division always means the same thing. A conceptual challenge students may encounter is thinking that different strategies or division problems require different mathematical calculations.

The concept is developed through work with area models and partial quotients, which students compare to arrive at the conclusion that, despite the strategy, division always means the same thing. This work helps students deepen their understanding of division because they are required to justify that, mathematically, division does not change even if the strategy or situation does. Students engage in Mathematical Practice 3 (Construct a viable argument) as they justify why the meaning of division does not change, even if the strategy or type of problem does. A small sketch of the partial-quotients strategy follows after the vocabulary list.

Key vocabulary:
• area model
• dividend
• division
• divisor
• partial quotients
• quotient
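As noted above, here is a small illustrative sketch of the partial-quotients strategy (written in Python for concreteness; it is not part of the lesson materials). Whatever "friendly" chunks are subtracted, the partial quotients always add up to the same quotient that any other strategy gives, which is the point of the lesson.

```python
def partial_quotients(dividend, divisor):
    """Divide whole numbers by repeatedly subtracting an easy multiple of
    the divisor (here, the largest power-of-ten multiple that still fits)
    and recording each partial quotient."""
    partials = []            # e.g. [100, 20, 3] for 984 / 8
    remaining = dividend
    while remaining >= divisor:
        multiple = 1
        while divisor * multiple * 10 <= remaining:
            multiple *= 10
        count = remaining // (divisor * multiple)   # how many such chunks fit
        partials.append(count * multiple)
        remaining -= divisor * multiple * count
    return partials, remaining

pieces, remainder = partial_quotients(984, 8)
print(pieces, sum(pieces), remainder)   # [100, 20, 3] 123 0  -> 984 / 8 = 123
```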
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8557502627372742, "perplexity": 3867.4182834408043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187690.11/warc/CC-MAIN-20170322212947-00450-ip-10-233-31-227.ec2.internal.warc.gz"}
https://planetcalc.com/503/
# Julian Day

Calculation of the Julian day for a given date, plus some information about it.

Astronomers often need to know the difference between two dates, or be able to calculate the next date of some periodic event. For events which are quite far from one another, like comet appearances, a regular calendar is not well suited, due to the different number of days in months, leap years, calendar reforms (Julian/Gregorian) and so on. Thus, Joseph Justus Scaliger, a French scholar (1540 - 1609), invented Julian Dates, or Julian Days, named after his father, Julius Scaliger. So it is not about the Julian calendar at all.

The Julian day is a counter, incremented by one each day. So, if you know the value of the Julian day for one date and the value for another, you can simply subtract one from the other and find out the difference. The start of Julian days, called the start of the Julian era, is defined as noon of January 1st, 4713 B.C. in the Julian calendar. With this date, all known historical astronomical observations have positive Julian day numbers, so all calculations are simple additions and subtractions.

The Julian day is a fractional number, where the whole part corresponds to 12:00 PM (noon), 0.25 is 6:00 PM, 0.5 is 12:00 AM (midnight), 0.75 is 6:00 AM, etc. Because the first two digits of the Julian day remain constant for about three centuries, a shorter version, the Modified Julian Date, is sometimes used. The start of Modified Julian days is defined as midnight of November 17th, 1858, and $MJD = JD - 2400000.5$

Also I should note, as a programmer, that this method - converting a calendar date to some number and then using additions and subtractions - is always used by programmers. In JavaScript, for example, they use the number of milliseconds passed since January 1st, 1970 as such a counter.
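A small Python sketch (mine, not taken from the calculator page) reproducing these definitions for dates at midnight UT:

```python
# Julian Day and Modified Julian Day for a calendar date at 00:00 UT.
# date.toordinal() counts days with 0001-01-01 = 1; adding the fixed
# offset 1721424.5 converts that count to a Julian Day number.
from datetime import date

def julian_day(d: date) -> float:
    return d.toordinal() + 1721424.5

def modified_julian_day(d: date) -> float:
    return julian_day(d) - 2400000.5

print(julian_day(date(1970, 1, 1)))             # 2440587.5
print(modified_julian_day(date(1858, 11, 17)))  # 0.0 (the MJD epoch)
```

The 0.5 offsets appear because Julian days start at noon while calendar dates start at midnight.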
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7621994614601135, "perplexity": 2623.360674312629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156224.9/warc/CC-MAIN-20180919122227-20180919142227-00118.warc.gz"}
https://mathoverflow.net/questions/66881/sets-as-combinatorial-games
# Sets as Combinatorial Games Just a few days ago my seemingly eternal and recurrent fascination for Conway's combinatorial game theory (CGT) & surreal numbers had a recrudescence, so I grabbed this excellent survey, and began reading. Some old thoughts came to the surface from the archives of my memory. Here they are: the class $SURREAL$ contains the class $ON$, and ordinals are the spine of $V$, the "universe of sets". So, pushing the analogy, can I say that combinatorial games generalize sets, or conversely sets are (special) combinatorial games? If the answer is yes, can I even go further, and develop some foundational theory which starts from games, not sets, and then define ordinary sets as those special games? This question can be broken down into 3 sub-questions: 1. does there exist a treatment of combinatorial games as a first order axiomatic theory, presented without the recourse to sets? 2. what kind of games are ordinary ZF sets? (perhaps "solitaire" games, where the opponent doesn't do anything, or perhaps perfectly symmetric games). In other words, assuming 1) above, which interpretations of ZF are available inside CGT? 3. could one reformulate some familiar constructions of classical set theory in the language of CGT? Any material, thoughts, refs, on 1) -3)? PS In this daydreaming I saw a picture of an extended universe where there is a double-cone of sets, V and -V, as in SURREAL there are positive and negative ordinals.... - I think the answer to the first question (in bold) is, "of course." Just as, quoting On Numbers and Games, • If $L, R$ are any two sets of numbers, and no member of $L$ is $\geq$ than any member of $R$, then there is a number $\{L|R\}$. All numbers are constructed this way we can say • If $L, R$ are any two sets of games, then there is a game $\{L|R\}$. All games are constructed this way. A good picture of a game is a tree with no infinite branches (so that we can argue by recursion; this corresponds to the phrase "all games are constructed this way", and is analogous to the Foundation axiom of set theory), where each edge is colored either Lavender or Red. If all edges are colored Lavender, say, then we have an ordinary (hereditary) set, and on the class of such games, the relation "$x$ is a member of $y$" is synonymous with "$x$ is an ($L$) option of $y$". It seems to me that a theory of games is perforce a theory of hereditary sets with not one but two membership relations $\in_L$, $\in_R$. You could of course give a theory of such just the same way you give a theory of sets, and it no more requires a pre-existing class of sets than the theory ZF requires a pre-existing class of sets. (Maybe I'm not exactly sure what you mean by "without recourse to sets".) The theory has two binary predicates $\in_L, \in_R$ where each separate predicate satisfies some axioms; they could be the ZF axioms, or the axioms of any other membership-based set theory. In combinatorial game theory, these two predicates don't interact (at the level of the axioms), but it might be interesting to contemplate theories where they do interact. Put this way, one can reformulate any construction of set theory in terms of game theory, but I don't see what advantage there would be. My own preferred development of the theory might be along the lines of Algebraic Set Theory (see the book by Joyal and Moerdijk). - Todd, great answer! A few observations: to begin with, not so sure it does not have any advantage. 
Back in the stone age, I studied the so-called topological games, ie games where the players choose an open set (and perhaps somthing else) alternatively. There are very subtle topological ideas which can be expressed in this language (see for instance here: rac.es/ficheros/doc/00232.pdf). Now, are we sure that playing sets cannot give deep insights into their structure? –  Mirco Mannucci Jun 4 '11 at 16:51 As for your axiomatization: of course it works, only thing is, I believe it is not conducive to new insights. I would better think of a set/game as the something that PROJECTS down into its left and right sides, its Jedi moves, and its Sith moves. epsilon is not the proper conceptualization, I think, something like your other suggestion, an algebraic categorical framework maybe better. As for hereditary sets, yes, but we could also consider games that are "circular", namely after a sequence of moves you may come back to the same place. That is beyond conventional CGT, but why not? –  Mirco Mannucci Jun 4 '11 at 16:57 Presumably, there is one way in which $\in_L$ and $\in_R$ do not separately satisfy the ZF axioms, but instead must "interact": the appropriate extensionality axiom would be that two games are equal if they have the same extension under both membership relations; simply having the same extension under one or the other would not suffice. –  Sridhar Ramesh Jun 4 '11 at 17:09 @Sridhar: whoops! Good point; I overlooked that. @Mirco: I like the idea of circular games; sort of like the analogue of sets with the antifoundation axiom. I believe this could be formalized along the lines of algebraic set theory (which I all too briefly mentioned); instead of considering an initial ZF algebra, in the sense of Joyal and Moerdijk, as capturing the cumulative hierarchy, one could consider a terminal ZF algebra instead, and perform arguments by corecursion. I'm pretty sure David Corfield has mentioned something along these lines at the n-Category Cafe. –  Todd Trimble Jun 4 '11 at 22:20 foundation of math, but so what? more than 30 years later cats are all over the places, providing a language (and tools) for almost everything under the sun. Now, do really I believe games are better than cats or sets? Dunno. But my gut feeling tells me that they unravel some basic intuition about math which is neither the "extensional" one (sets), nor the "functional" one (cats). What is it then? I would say, "free actions", in a given environment. Not to mention the fact that games predate both sets and functions in the history of civilization.... –  Mirco Mannucci Jun 7 '11 at 22:10 I wrote a little something about sets as 1-player games here. A set is the collection of moves you can make in a game. Well-foundedness means you always lose. So the only thing you can do is try to put off the inevitable for as long as you can. If you can model the state of a computer program as a game, you can use this to prove termination. As Todd says, there isn't really an advantage to be had by looking at things this way, except it's fun to think of sets in a different way, and I think it might help motivate some parts of set theory to some students who might otherwise find the concepts daunting. - Thanks for your EXCELLENT link! When I suggested the slogan "sets are solitaires" in my question I was thinking at something along your lines. By the way, I think your approach is a superb educational approach to set theory, that cuts through the mystique, and gives some "operational" insights into the beautiful world of sets. 
But I believe that this interpretation is not the only one possible: how about considering a set $X$ as "doubled"? I mean both L and R have the same set of moves, namely X itself. As for being useful, read my answer to Todd –  Mirco Mannucci Jun 4 '11 at 16:48
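The encoding discussed in the answer above, a hereditary set as a game in which every move belongs to a single player, is concrete enough to sketch in code. The following is only an illustration of that idea (Python; class and function names are mine, not from the thread): each game carries two option sets, playing the roles of the two membership relations, and an ordinary set becomes a game whose Right option set is empty.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Game:
    left: FrozenSet["Game"] = frozenset()    # Left options  (plays the role of "∈_L")
    right: FrozenSet["Game"] = frozenset()   # Right options (plays the role of "∈_R")

def from_set(s: frozenset) -> "Game":
    """Encode a hereditary set as a one-sided game: members become Left options."""
    return Game(left=frozenset(from_set(x) for x in s))

# The first von Neumann ordinals 0 = {}, 1 = {0}, 2 = {0, 1} as one-sided games
zero = from_set(frozenset())
one = from_set(frozenset({frozenset()}))
two = from_set(frozenset({frozenset(), frozenset({frozenset()})}))
print(zero in two.left and one in two.left)   # True: "membership" = being a Left option
```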
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7539143562316895, "perplexity": 591.4483123071863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098059.60/warc/CC-MAIN-20150627031818-00105-ip-10-179-60-89.ec2.internal.warc.gz"}
https://en.xen.wiki/w/Just_Hammond
# Just Hammond

This article features just intervals created by the mechanical tonegenerator of the classical Hammond B-3 Organ model.

## Design of the Hammond B-3's Tonegenerator

Since 1935 the Hammond Organ Company's goal was to market electromechanical organs[1] with 12-tone equally tempered (12edo) tuning. The mechanical tonegenerator of the Hammond B-3 Organ is based on a set of twelve different pairings of gearwheels that make (12*4) driven shafts turn. The corresponding driving gearwheels are mounted on a common shaft and all turn at the same rotational speed n1. Certain gears reduce, others increase rotational speed.[2] For every chromatic pitch class four driven shafts are installed. Pure octaves are generated by dedicated tonewheels (with 2, 4, 8, 16, 32, 64 or 128 high and low points on their edges) that rotate with the driven shafts. Each high point on a tone wheel is called a tooth. When the gears are in motion, magnetic pickups react to the tonewheels' passing teeth and generate an electric signal that can be amplified and transmitted to a loudspeaker.

For each pair of gearwheels the ratio of rotational speeds n2/n1 is determined by the inverse ratio of the gearwheels' integer teeth numbers Z1 and Z2:

$\frac{Z_1}{Z_2}=\frac{n_2}{n_1}$

To calculate the rotational speed n2 of the driven shafts we write

$n_2=\frac{Z_1}{Z_2}\cdot n_1$

Table 1: Pairings of Gearwheels[3] / Ratios and Intervals

| Pitch class | Driving Z1 [teeth] (A) | Driven Z2 [teeth] (B) | Gear ratio, canceled (C)/(D) | Ratio (C)/(D) | Interval [cents] | Deviation from 12edo [cents] | Intonation [cents] |
|---|---|---|---|---|---|---|---|
| C  | 85  | 104 | 85/104  | 0.8173077 | -349.26 | -49.26 | -0.576 |
| C# | 71  | 82  | 71/82   | 0.8658537 | -249.37 | -49.37 | -0.684 |
| D  | 67  | 73  | 67/73   | 0.9178082 | -148.48 | -48.48 | 0.200 |
| D# | 105 | 108 | 35/36   | 0.9722222 | -48.77  | -48.77 | -0.088 |
| E  | 103 | 100 | 103/100 | 1.0300000 | 51.17   | -48.83 | -0.145 |
| F  | 84  | 77  | 12/11   | 1.0909091 | 150.64  | -49.36 | -0.681 |
| F# | 74  | 64  | 37/32   | 1.1562500 | 251.34  | -48.66 | 0.026 |
| G  | 98  | 80  | 49/40   | 1.2250000 | 351.34  | -48.66 | 0.020 |
| G# | 96  | 74  | 48/37   | 1.2972973 | 450.61  | -49.39 | -0.707 |
| A  | 88  | 64  | 11/8    | 1.3750000 | 551.32  | -48.68 | 0.000 |
| A# | 67  | 46  | 67/46   | 1.4565217 | 651.03  | -48.97 | -0.285 |
| B  | 108 | 70  | 54/35   | 1.5428571 | 750.73  | -49.27 | -0.593 |

Intonation is given for note A rendering standard pitch, i.e. shaft (A) rotating at 20 rev./sec. (In the source table, cells containing prime numbers are highlighted in purple.)

## Just Intervals

When we associate "ratios of the gearwheels' integer teeth numbers" with "frequency ratios between partials", we realize an intrinsic just interval determined by integer teeth numbers within such a mechanical gear, even without turning the shafts! Although the Hammond Organ pretends to generate a 12edo scale, the instrument in fact creates a high-prime-limit just scale.

## Tuning

The whole set of frequency ratios is fixed by the design of the gear mechanism. The driving shaft's (A) rotational speed n1 determines the instrument's (master) tuning. Rotating at exactly 1200 rpm (20 rev./sec), the pitch of note A equals precisely 27.500 Hz or one of its doublings. Therefore the instrument aligns note A with a concert pitch of 440.0 Hz.
$f_\text{A}=20.0/\text{s}\cdot\frac{88}{64}\cdot(2^4)=440.0/\text{s} = 440.0 \text{ Hz}$

## Mapping Hammond's Rational Intervals to the Harmonic Series

To find out where the rational intervals played on a Hammond Organ occur in the harmonic series, we
• cancel the fractions of the gear ratios specified by Hammond,
• calculate the least common multiple (LCM) of the denominators of the "intervals of interest" by prime factorization, and
• with this specific LCM recalculate the numerators of the intervals. The resulting numerators correspond to the partial numbers we are looking for.

Before we proceed, we have to agree on a numbering scheme for octaves in the harmonic series.

## Numbering Octaves

We apply the scheme from the article First Five Octaves of the Harmonic Series and number the octaves as follows:
• Integer octave numbering starts with #1 for the range between the 1st and < 2nd partial
• The 2nd octave starts at partial #2 (= 2^1) and covers partials 2 and 3
• The 3rd octave starts at partial #4 (= 2^2) and covers partials 4, 5, 6 and 7
• The 4th octave starts at partial #8 (= 2^3) and covers partials 8, 9, 10, 11, 12, 13, 14 and 15.
• ...

This numbering scheme is consistent with the scheme used by Bill Sethares[4]: "In general, the nth octave contains 2^(n-1) pitches."

## Mapping Hammond's Rational Intervals (cont.): Examples

The following examples illustrate how to map intervals or chords to the harmonic series.

#### Example 1: Mapping a single interval

In this example we map the combination of a Hammond Organ's note E and a higher note A to the harmonic series.

Table 2: Mapping the fourth E-A

| Pitch class | Fraction (C)/(D) | Prime factorization of (D) | Partial # P = LCM · (C)/(D) | Octave # (from 1/1) = 1 + ln(P)/ln(2) |
|---|---|---|---|---|
| E | 103/100 | 2·2·5·5 | 206 | 8.7 |
| A | 11/8 | 2·2·2 | 275 | 9.6 |
| LCM of the denominators | | 2·2·2·5·5 | 200 (LCM) | 8.6 |

(Decimal octave numbers are printed for orientation only.)

The resulting interval appears between partial #206 and partial #275. Thus the frequency ratio is (275:206), which equals 500.14 cents.

#### Example 2: Mapping a chord

Adding an upper fifth (note B), the second example illustrates how to map the resulting sus4-chord E-A-B to the harmonic series.

Table 3: sus4-chord E-A-B

| Pitch class | Fraction (C)/(D) | Prime factorization of (D) | Partial # P = LCM · (C)/(D) | Octave # (from 1/1) = 1 + ln(P)/ln(2) |
|---|---|---|---|---|
| E | 103/100 | 2·2·5·5 | 1442 | 11.5 |
| A | 11/8 | 2·2·2 | 1925 | 11.9 |
| B | 54/35 | 5·7 | 2160 | 12.1 |
| LCM of the denominators | | 2·2·2·5·5·7 | 1400 (LCM) | 11.4 |

(Decimal octave numbers are printed for orientation only.)

The supplemental note B establishes an additional prime factor. We find the matching pattern of partials for this sus4-chord (1442:1925:2160) farther up in the harmonic series, where this chord spans the boundary between the 11th and the 12th octave.
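The mapping used in Examples 1 and 2 is mechanical enough to automate. The sketch below is not part of the original article; it is a small Python illustration (function name mine; requires Python 3.9+ for math.lcm) of the same LCM-of-denominators procedure, reproducing Example 1:

```python
from fractions import Fraction
from math import lcm, log2

def map_to_harmonic_series(ratios):
    """Map just intervals (Fractions over a common 1/1) to partial numbers
    of a single harmonic series, via the LCM of their denominators."""
    ratios = [Fraction(r) for r in ratios]
    common = lcm(*(r.denominator for r in ratios))                 # LCM of the (D) column
    partials = [r.numerator * (common // r.denominator) for r in ratios]
    return common, partials

# Example 1: the fourth E-A, gear ratios 103/100 and 11/8
base, parts = map_to_harmonic_series([Fraction(103, 100), Fraction(11, 8)])
print(base, parts)                         # 200 [206, 275]
print(1200 * log2(parts[1] / parts[0]))    # ~500.1 cents for the ratio 275:206
```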
#### Example 3: Mapping all of the tonegenerator's pitch classes

The full set of the Hammond Organ's intervals resides surprisingly far up in the harmonic series:
• The 44th octave starts at partial #2^43, just below the set of partials determined by the Hammond Organ's tonegenerator
• The 45th octave starts right within the derived set of partials, at partial #2^44

Table 4: The full set of intervals' position in the Harmonic Series

| Pitch class | Fraction (C)/(D) | Prime factorization of (D) | Partial # P = LCM · (C)/(D) | Octave # (from 1/1) = 1 + ln(P)/ln(2) |
|---|---|---|---|---|
| C  | 85/104  | 2·2·2·13   | 15,003,356,791,500 | 44.8 |
| C# | 71/82   | 2·41       | 15,894,517,438,800 | 44.9 |
| D  | 67/73   | 73         | 16,848,249,818,400 | 44.9 |
| D# | 35/36   | 2·2·3·3    | 17,847,130,301,000 | 45.0 |
| E  | 103/100 | 2·2·5·5    | 18,907,759,758,888 | 45.1 |
| F  | 12/11   | 11         | 20,025,870,883,200 | 45.2 |
| F# | 37/32   | 2·2·2·2·2  | 21,225,337,107,975 | 45.3 |
| G  | 49/40   | 2·2·2·5    | 22,487,384,179,260 | 45.4 |
| G# | 48/37   | 37         | 23,814,549,158,400 | 45.4 |
| A  | 11/8    | 2·2·2      | 25,240,941,425,700 | 45.5 |
| A# | 67/46   | 2·23       | 26,737,439,929,200 | 45.6 |
| B  | 54/35   | 5·7        | 28,322,303,106,240 | 45.7 |
| LCM of the denominators | | 2·2·2·2·2·3·3·5·5·7·11·13·23·37·41·73 | 18,357,048,309,600 (LCM) | 45.1 |

(Decimal octave numbers are printed for orientation only.)

## Discussion

No doubt, the evidence that a cluster of 12 simultaneously ringing semitones from a Hammond Organ is allocated around the 45th octave of the harmonic series is of limited practical value. The intervals' far-up placement is mainly caused by Laurens Hammond's use of various prime numbers (11, 13, 23, 37, 41, 73) in different gearwheel pairings.

• The respective high-order partials are very densely spaced (in the range of pico-cents), and intervals between successive partials up there are far too narrow for musical applications
• Due to its construction the tonegenerator selects only twelve of the 17.6 trillion varieties in the 45th octave, where…
• the partial number associated with the LCM, which is located exactly 8/11 below pitch class A, is not addressed, because there is no gear with transmission ratio 1.000
• no pure octave above a virtual root (1/1; partial #2^44) is playable, which would ring -624.997 cents way down from pitch class A

## General Applicability

The method of prime factorization to find the LCM can be applied to arbitrary intervals, chords or scales built from rational intervals to identify their position in the harmonic series. Simply replace the gear ratios by the just intervals of interest.

## References

1. Web resource https://en.wikipedia.org/wiki/Hammond_organ (retrieved December 2019)
2. Detailed photos of a similar M-1 tonegenerator are provided by https://modularsynthesis.com/hammond/m3/m3.htm (retrieved December 2019)
3. Gearing details were taken from http://www.goodeveca.net/RotorOrgan/ToneWheelSpec.html (retrieved Dec 29, 2019). The German Wikipedia provides the same technical information (in German): https://de.wikipedia.org/wiki/Hammondorgel#Tonerzeugung (retrieved Dec 29, 2019). The HammondWiki publishes a second, alternative set of gear ratios with a slightly deviating pitch class "E"; certain other pitch classes are shifted by pure octaves: http://www.dairiki.org/HammondWiki/GearRatio (retrieved Dec 29, 2019)
4. Sethares, William A. Tuning, Timbre, Spectrum, Scale. London: Springer Verlag, 1999.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6544135808944702, "perplexity": 6490.916467713673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00622.warc.gz"}
https://www.deepdyve.com/lp/springer_journal/homotopically-invisible-singular-curves-gMGrzNu9nn
# Homotopically invisible singular curves

Given a smooth manifold M and a totally nonholonomic distribution $$\Delta \subset TM$$ of rank $$d\ge 3$$, we study the effect of singular curves on the topology of the space of horizontal paths joining two points on M. Singular curves are critical points of the endpoint map $$F\,{:}\,\gamma \mapsto \gamma (1)$$ defined on the space $$\Omega$$ of horizontal paths starting at a fixed point x. We consider a sub-Riemannian energy $$J\,{:}\,\Omega (y)\rightarrow \mathbb R$$, where $$\Omega (y)=F^{-1}(y)$$ is the space of horizontal paths connecting x with y, and study those singular paths that do not influence the homotopy type of the Lebesgue sets $$\{\gamma \in \Omega (y)\,|\,J(\gamma )\le E\}$$. We call them homotopically invisible. It turns out that for $$d\ge 3$$ generic sub-Riemannian structures in the sense of Chitour et al. (J Differ Geom 73(1):45–73, 2006) have only homotopically invisible singular curves. Our results can be seen as a first step for developing the calculus of variations on the singular space of horizontal curves (in this direction we prove a sub-Riemannian minimax principle and discuss some applications).

Calculus of Variations and Partial Differential Equations, Springer Journals, Volume 56 (4), Jul 10, 2017, 34 pages. Publisher: Springer Berlin Heidelberg. Subject: Mathematics; Analysis; Systems Theory, Control; Calculus of Variations and Optimal Control; Optimization; Theoretical, Mathematical and Computational Physics. ISSN 0944-2669, eISSN 1432-0835. DOI 10.1007/s00526-017-1203-z
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7939309477806091, "perplexity": 1546.461371717937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817437.99/warc/CC-MAIN-20180225205820-20180225225820-00514.warc.gz"}
http://mathhelpforum.com/differential-geometry/224334-tough-connected-problem.html
## Tough Connected Problem

I just need help understanding this problem: Suppose that $\{(X_i,\Omega_i) : i\in I\}$ is a family of topological spaces and suppose that $\bold{x}$ and $\bold{y}$ are members of $\prod \{X_i : i\in I\}$. If $\bold{x}$ and $\bold{y}$ differ in only one coordinate, prove that they must lie in the same connected component.

What we have to work with: Let $(X,\Omega)$ and $(Y,\Theta)$ be topological spaces. If $(X,\Omega)$ is connected and the function $f: X \to Y$ is continuous relative to $\Omega$ and $\Theta$, then $f(X)$ is connected as a subspace of $(Y,\Theta)$.

Suppose that $\{(X_i,\Omega_i) : i\in I\}$ is a family of topological spaces. Fix $i\in I$, and for all $j\in I - \{i\}$, fix $a_j \in X_j$. Define the function $f_i: X_i \to \prod\{X_i : i\in I\}$ by $f_i(x) = \bold{x}$, where $\pi_i(\bold{x}) = x$ and $\pi_j(\bold{x}) = a_j$ when $j\neq i$. Using this map, it can be shown that $(X_i,\Omega_i)$ is homeomorphic to the subspace $f_i(X_i)$.

Both of these have been proven in class. I'm just failing to see how to use these when one coordinate differs. Any help is greatly appreciated!
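A sketch of how the two quoted facts combine (my addition, not part of the original post; it assumes each $(X_i,\Omega_i)$ is connected, which the first fact suggests is the standing hypothesis): suppose $\bold{x}$ and $\bold{y}$ agree except in coordinate $i$. Apply the second fact with $a_j = \pi_j(\bold{x}) = \pi_j(\bold{y})$ for $j \neq i$. Then $f_i(X_i)$ is a homeomorphic (in particular continuous) image of the connected space $X_i$, hence connected by the first fact, and it contains both $\bold{x} = f_i(\pi_i(\bold{x}))$ and $\bold{y} = f_i(\pi_i(\bold{y}))$. A connected subspace containing both points lies inside a single connected component, so $\bold{x}$ and $\bold{y}$ belong to the same component.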
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8951351642608643, "perplexity": 43.6465019149741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543316.16/warc/CC-MAIN-20161202170903-00477-ip-10-31-129-80.ec2.internal.warc.gz"}
http://link.springer.com/article/10.1007%2Fs10240-012-0046-6
# A variational approach to complex Monge-Ampère equations

Volume 117, Issue 1, pp 179-245. Date: 14 Nov 2012

## Abstract

We show that degenerate complex Monge-Ampère equations in a big cohomology class of a compact Kähler manifold can be solved using a variational method, without relying on Yau's theorem. Our formulation yields in particular a natural pluricomplex analogue of the classical logarithmic energy of a measure. We also investigate Kähler-Einstein equations on Fano manifolds. Using continuous geodesics in the closure of the space of Kähler metrics and Berndtsson's positivity of direct images, we extend Ding-Tian's variational characterization and Bando-Mabuchi's uniqueness result to singular Kähler-Einstein metrics. Finally, using our variational characterization we prove the existence, uniqueness and convergence as k→∞ of k-balanced metrics in the sense of Donaldson both in the (anti)canonical case and with respect to a measure of finite pluricomplex energy.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638917803764343, "perplexity": 917.0793227215864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822560.65/warc/CC-MAIN-20140820021342-00212-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/applied-stats-help-dont-even-understand-the-question.371008/
# Homework Help: Applied Stats Help - Don't even understand the question

1. Jan 20, 2010

### jimbodonut

1. The problem statement, all variables and given/known data

Suppose x = (x_1, x_2, ..., x_n)^T ∈ R^n is a random vector drawn from the n-dimensional standard Gaussian distribution N(0, I), where 0 = (0, 0, ..., 0)^T (the zero vector transposed) and I is the identity matrix.

(b) On average, how far away (in terms of squared Euclidean distance) from the origin do you expect x to be? In other words, what is E(||x||^2)?

(c) Now suppose we fix x and draw another random vector z from N(0, I). If we project z onto the direction of x, how far away from the origin (again, in terms of squared Euclidean distance) do you expect the projection to be? (Hint: Let u be the unit vector pointing in the direction of x. Then u^T z is the projection of z onto the direction of x. Find the expectation and variance of u^T z conditional on u.)

2. Relevant equations

3. The attempt at a solution

no attempt... don't even understand the question... :P Thanks guys... ur help is greatly appreciated...

2. Jan 20, 2010

Start with the univariate case: if Z is normal 0, 1, what do you know about the distribution of Z^2? If $$Z \sim \text{MVN}_n \big(0, I\big)$$, how does the univariate case relate to $$|Z|^2$$ in the multivariate case? Once you know the distribution of $$|Z|^2$$ you can answer the second question. Work on those before thinking about the third.

3. Jan 20, 2010

### jimbodonut

i figured out a) and b). It is a chi-square distribution since ||x|| = sqrt(x_1^2 + x_2^2 + ... + x_n^2), thus ||x||^2 = x_1^2 + x_2^2 + ... + x_n^2. Since x_i ~ N(0,1), it is chi-square. and the expectation of a chi-square distribution is its degrees of freedom... in this case... E(||x||^2) = n

but now what's c)???
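A quick Monte Carlo check of parts (b) and (c), my addition rather than part of the thread (numpy; variable names mine). Conditional on u, the scalar u^T z is N(0, 1) because z is standard normal and independent of u, so the expected squared length of the projection is 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000

x = rng.standard_normal((trials, n))   # x ~ N(0, I)
z = rng.standard_normal((trials, n))   # independent z ~ N(0, I)

# (b) E ||x||^2 = n  (sum of n independent chi-square(1) variables)
print(np.mean(np.sum(x**2, axis=1)))   # ~ 5

# (c) squared length of the projection of z onto the direction of x
u = x / np.linalg.norm(x, axis=1, keepdims=True)   # unit vector along x
proj_len_sq = np.einsum("ij,ij->i", u, z) ** 2     # (u^T z)^2 per trial
print(np.mean(proj_len_sq))            # ~ 1, since u^T z | u ~ N(0, 1)
```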
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566787481307983, "perplexity": 1313.9609001174483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156901.47/warc/CC-MAIN-20180921072647-20180921093047-00066.warc.gz"}
http://math.stackexchange.com/users/5775/zol-tun-kul?tab=activity&sort=all&page=3
Zol Tun Kul Reputation 1,879 Top tag Next privilege 2,000 Rep. Oct 3 accepted Deriving $\frac{8}{\sqrt{x-2}}$ Oct 2 awarded Notable Question Oct 2 asked Deriving $\frac{8}{\sqrt{x-2}}$ Oct 1 comment Why $(-1 \cdot h) = -1$ when $h$ approaches $0$? By the way, if $h \not = 0$, how come the denominator is $(x-1)^2$? I had said that "$h$ is practically $0$" but I'm not that convinced anymore. Oct 1 accepted Why $(-1 \cdot h) = -1$ when $h$ approaches $0$? Oct 1 asked Why $(-1 \cdot h) = -1$ when $h$ approaches $0$? Sep 29 awarded Popular Question Sep 20 comment Proving that $\lim_{x\to3}\frac{x}{4x-9}=1$ @AndréNicolas I see now. Thanks. By the way, you picked $\frac{1}{4}$ because it made arithmetic easier - but how can I determine the "largest acceptable value" that can replace $\frac{1}{4}$? (out of curiosity) Sep 20 accepted Proving that $\lim_{x\to3}\frac{x}{4x-9}=1$ Sep 20 comment Proving that $\lim_{x\to3}\frac{x}{4x-9}=1$ @EWHLee I see. I wonder if $\frac{1}{4x-9}$ should have been $\frac{1}{|4x-9|}$ instead... Wait, no, that doesn't work either. Goddammit. What should I have done instead? Sep 20 comment Proving that $\lim_{x\to3}\frac{x}{4x-9}=1$ @AndréNicolas Ahhhh yes, that's right. By the way, isn't $|(x-3)| < \delta < \frac{9}{4}$ enough to keep $|4x-9|$ away from $0$ since $\frac{9}{4}$ is the solution to $4x-9$? Sep 20 asked Proving that $\lim_{x\to3}\frac{x}{4x-9}=1$ Sep 20 asked Why is $|(x-2)| < \delta \le 1$ true when proving $\lim_{x\to2}(3x^2-x)=10$? Sep 19 comment Proving $\lim_{x\to1}(x^2+3)=4$ Hm I'm not very sure how can $|x-1| < \delta$ lead to $|x+1|\le 3$. If you increase $|x-1|$ by $2$ to reach $|x+1|$, shouldn't it be $|x+1| \le 2$ rather than $3$? Sep 19 accepted Proving $\lim_{x\to1}(x^2+3)=4$ Sep 19 comment Why does $|x-1|^2+3|x-1| < \epsilon \implies |x-1|^2 < \frac{\epsilon}{2} \ \ \land \ \ 3|x-1| < \frac{\epsilon}{2}$? @Oleg567: What if $a = b = \epsilon/2$? Sep 19 asked Why does $|x-1|^2+3|x-1| < \epsilon \implies |x-1|^2 < \frac{\epsilon}{2} \ \ \land \ \ 3|x-1| < \frac{\epsilon}{2}$? Sep 19 asked Proving $\lim_{x\to1}(x^2+3)=4$ Sep 12 asked Determinant of a matrix with trigonometry functions. Sep 7 asked Determining what values in a system can cause infinite/unique/no solutions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6041571497917175, "perplexity": 999.6950477547348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646313806.98/warc/CC-MAIN-20150827033153-00332-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsoverflow.org/40617/what-are-donaldson-thomas-invariants-physical-string-theory
# What are Donaldson-Thomas Invariants in Physical String Theory? + 4 like - 0 dislike 666 views Let $X$ be a projective, smooth Calabi-Yau threefold and let $Z \subset X$ be a subscheme supported on curves and points.  Its structure sheaf $\mathcal{O}_{Z}$ fits into the short exact sequence $$0 \to \mathcal{I}_{Z} \to \mathcal{O}_{X} \to \mathcal{O}_{Z} \to 0.$$ One can show that the D-brane charge of $\mathcal{O}_{Z}$ is $$\mathcal{Q}(\mathcal{O}_{Z}) = \text{PD}\bigg(\text{ch}(\mathcal{O}_{Z}) \sqrt{\text{td}X}\bigg) = (0,0, \beta, n),$$ where $\beta = [Z]_{\text{red}} \in H_{2}(X, \mathbb{Z})$ and $\chi(\mathcal{O}_{Z})=n$.  Mathematically, the Donaldson-Thomas invariants are a (virtual) count of ideal sheaves corresponding to $\mathcal{O}_{Z}$ with fixed charge $\mathcal{Q}$. Now for the physics...I believe one thinks of $\mathcal{O}_{Z}$ as a bound state of D2-D0 branes.  The morphism $\mathcal{O}_{X} \to \mathcal{O}_{Z}$ is the coupling to a single D6-brane.  One says (I think) that the Donaldson-Thomas invariants in B-model topological string theory computes BPS states associated to D6-D2-D0 B-brane bound states.  My first question is: 1. What kind of BPS states are these?  Are they BPS particles in 4d? My second question is: 2. What are Donaldson-Thomas invariants or the corresponding partition function in some physical string theory?  In the paper (https://arxiv.org/pdf/hep-th/0403167.pdf) even though they don't call them that, they're talking about DT invariants in Type IIB string theory.  But they talk about them as D5-D1-D(-1) bound states.  How do these relate to the D6-D2-D0 bound states in the topological sector?  Obviously, there are subtle points translating between topological and physical branes.  And in Type IIB, indeed the D$p$-branes must have $p$ odd. My final question is of a different flavor than those above, but still interesting I think. 3. Mathematically, the structure sheaves $\mathcal{O}_{Z}$ can have "thickened" or non-reduced structure.  What role does this thickening play in physics?  What is a "thickened" brane, if indeed one should think of it that way? + 4 like - 0 dislike 1) Compactifying IIA string theory on a Calabi-Yau 3-fold $X$, we get in the four non-compact dimensions a theory with $\mathcal{N}=2$ supersymmetry. D6-D2-D0 branes wrapped on $X$, with one non-compact direction (time direction), define BPS particles in 4d. 2) The spectrum of BPS particles in a $\mathcal{N}=2$ 4d theory depends on the vector multiplet background (in the vector multiplet moduli spaces, there are real codimension one walls along which the BPS spectrum changes). For IIA on X, the vector multiplet moduli space is the stringy Kähler moduli space, which near the large volume limit, is parametrized by the Kähler class on $X$ and the B-field. Donaldson-Thomas invariants (in the geometric sense of counts of ideal sheaves) are counts (more precisely supersymmetric indices) of the BPS states, of charge $(1,0,\beta,n)$, in the large volume limit (large Kähler class) and large B-field. If one considers IIB on $X$, one gets another $\mathcal{N}=2$ 4d theory, and  D5-D1-D(-1) branes completely wrapped on $X$ define instantons (in the broad sense of spacetime localized objects) in 4d. These instantons give quantum corrections to the hypermultiplet moduli space, which is a torus fibration over the stringy Kähler moduli space of $X$. 
These instanton corrections depend on the precise locus in the stringy Kähler moduli space, and Donaldson-Thomas invariants enter these instanton corrections in the large volume and large B-field regime. These two stories, IIA or IIB on $X$, with Donaldson-Thomas invariants counting either particles or instantons in 4d, are compatible, as is clear from a further compactification on a circle. Compactifying IIA on $X \times S^1$, the vector multiplet moduli space becomes a torus bundle over the stringy Kähler moduli space, with 3d instanton corrections coming from BPS particles in 4d wrapping the circle direction. Applying T-duality along the circle, one exchanges IIA and IIB and we get exactly the IIB hypermultiplet moduli space, with corrections now coming from instantons in 4d.

3) The thickening has to do with multiple branes coming together and with possibly non-trivial Higgs vevs turned on (see e.g. https://arxiv.org/abs/hep-th/0309270 )

answered Dec 25, 2017 by (5,140 points)

I'm not completely sure I understand your second to last paragraph about reconciling the IIA and IIB picture. Perhaps part of my confusion: do D$p$-branes still have a $(p+1)$-dimensional worldvolume even if they give rise to instantons in 4d? I understand the distinction between IIA and IIB, as well as the difference between BPS particles and BPS instantons, but I'm afraid I'm still a bit confused how the dimension-counting works in reconciling things. But thanks a lot for your great response.

Upon reflection, I think I understand your second to last paragraph after all. Unless I'm mistaken, if you have a D$p$-brane in IIA or IIB wrapping a $p$-cycle $\Sigma$ in $X$ giving rise to a BPS particle in 4d, you can compactify on the time circle and apply T-duality to get a BPS instanton in 3d in the dual theory. The BPS instanton in 3d arises from a "Euclidean brane" ED($p$-1) with a $p$-dimensional worldvolume $\Sigma$. This explains how D6-D2-D0 bound states in IIA turn into D5-D1-D(-1) bound states in IIB, yet with the same support in the Calabi-Yau. Does that sound right?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.842963457107544, "perplexity": 1307.7386919833086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00777.warc.gz"}
https://www.lumoslearning.com/llwp/resources/common-core-practice-tests-and-sample-questions/practice-test.html?cid=1746&lid=34870&qid=299454
# Lumos StepUp ACT Practice Program - Mathematics

Weighing the Possible Outcomes

## A spin on the casino "wheel of chance" gives you an equal opportunity to land on 4 spots. The probability of landing on each spot is then 1 in 4, or 0.25. Each spot has its own payout value as shown below. Find the expected value after one spin.

Section 1: $100
Section 2: -$100
Section 3: $250
Section 4: -$250
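For reference, a worked computation (my addition, not part of the original item): with each section equally likely at probability 0.25, the expected value of one spin is

$$E = 0.25(100) + 0.25(-100) + 0.25(250) + 0.25(-250) = \$0.$$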
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34200766682624817, "perplexity": 5801.553633658114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00531.warc.gz"}
http://mathhelpforum.com/differential-geometry/131717-applying-baire-s-thm.html
# Math Help - Applying Baire's Thm 1. ## Applying Baire's Thm Call a vector X = (x, y) in R2 "resonant" if it satisfies an equation of the form ax + by + c = 0 where a, b, c are integers, not all zero. Otherwise X is non-resonant. Use Baire theory to show that the set of "non-resonant" vectors is dense in R2. All I have so far is that a vector is non-resonant if it satisfies x not rational, y not rational or x /= q1 + q2*y, where q1 and q1 are rational. Note: Apply Baire's theory is a nice way of saying that we have to show the set is thick; that is, the intersection of countably many open and dense sets. 2. Originally Posted by southprkfan1 Call a vector X = (x, y) in R2 "resonant" if it satisfies an equation of the form ax + by + c = 0 where a, b, c are integers, not all zero. Otherwise X is non-resonant. Use Baire theory to show that the set of "non-resonant" vectors is dense in R2. All I have so far is that a vector is non-resonant if it satisfies x not rational, y not rational or x /= q1 + q2*y, where q1 and q1 are rational. Note: Apply Baire's theory is a nice way of saying that we have to show the set is thick; that is, the intersection of countably many open and dense sets. Do you have to show first that $\mathbb{R}^2$ is a Baire space or can you just conclude that it is since it's a completely metrizable space? 3. Originally Posted by Drexel28 Do you have to show first that $\mathbb{R}^2$ is a Baire space or can you just conclude that it is since it's a completely metrizable space? No. We can just use the fact that R2 is complete and so all we have to do is show that the required set (that is, of non-resonant vectors) is thick and the conclusion will follow. 4. Originally Posted by southprkfan1 No. We can just use the fact that R2 is complete and so all we have to do is show that the required set (that is, of non-resonant vectors) is thick and the conclusion will follow. My first inclination is that clearly if we let $K$ be the set of resonant vectors then clearly $\varphi:K\mapsto\mathbb{Z}^3$ given by $ax+by+c\mapsto (a,b,c)$ is an injection. Thus, $K$ is countable. So, for every $k\in K$ consider $\mathbb{R}^2-k$. This is clearly an open dense set and $\mathbb{R}^2-K=\bigcap_{k\in K}\left\{\mathbb{R}^2-k\right\}$. That could be wrong though, I didn't check it. 5. Originally Posted by Drexel28 My first inclination is that clearly if we let $K$ be the set of resonant vectors then clearly $\varphi:K\mapsto\mathbb{Z}^3$ given by $ax+by+c\mapsto (a,b,c)$ is an injection. Thus, $K$ is countable. So, for every $k\in K$ consider $\mathbb{R}^2-k$. This is clearly an open dense set and $\mathbb{R}^2-K=\bigcap_{k\in K}\left\{\mathbb{R}^2-k\right\}$. That could be wrong though, I didn't check it. Are we sure the function is an injection? I think you may have to add a condition that (a,b,c) have no common factor or something? 6. Originally Posted by southprkfan1 Are we sure the function is an injection? I think you may have to add a condition that (a,b,c) have no common factor or something? $\phi(ax+by+c)=\phi(a'x+b'y+c')\implies (a,b,c)=(a',b',c')\implies$ $a=a'\text{ }b=b'\text{ }c=c'\implies ax+by+c=a'x+b'y+c'$ 7. Originally Posted by southprkfan1 Call a vector X = (x, y) in R2 "resonant" if it satisfies an equation of the form ax + by + c = 0 where a, b, c are integers, not all zero. Otherwise X is non-resonant. Use Baire theory to show that the set of "non-resonant" vectors is dense in R2. 
All I have so far is that a vector is non-resonant if it satisfies x not rational, y not rational or x /= q1 + q2*y, where q1 and q1 are rational. Note: Apply Baire's theory is a nice way of saying that we have to show the set is thick; that is, the intersection of countably many open and dense sets.

For any $(a,b,c)\in \mathbb{Z}^3$, not all 0, you can easily check that $V_{a,b,c}:=\{(x,y):ax+by+c\neq 0\}$ is open. If you show that $V_{a,b,c}$ is dense, you conclude, because the set $V$ of all non-resonant vectors is the countable intersection of all the $V_{a,b,c}$. To check this density you have to assume $ax+by+c=0$ and show that there is $(x_1,y_1)$ "as near as desired" to $(x,y)$ such that $ax_1+by_1+c\neq 0$.
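One way to finish that last step (my addition, not from the thread): if $(a,b)\neq(0,0)$, take $(x_1,y_1)=(x+ta,\,y+tb)$ for small $t\neq 0$; then $ax_1+by_1+c = (ax+by+c) + t(a^2+b^2) = t(a^2+b^2) \neq 0$, and $(x_1,y_1)$ can be made arbitrarily close to $(x,y)$ by shrinking $t$. If $a=b=0$ then $c\neq 0$, so $V_{a,b,c}=\mathbb{R}^2$ and density is trivial.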
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9848668575286865, "perplexity": 251.48271203561495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936468546.71/warc/CC-MAIN-20150226074108-00132-ip-10-28-5-156.ec2.internal.warc.gz"}
https://warcraftcardgame.com/growing-up-game-guide/
# Growing Up 100% Walkthrough Guide

Growing Up Complete Guide: Achievements, Characters, Locations, Purchasing Useful Items, Mood Management, Brain Map Tactics, Exams, and Endings.

It will take 4 playthroughs to unlock all the achievements. There are no difficult achievements, but there are some that depend on randomness: the game may stubbornly refuse to offer a particular character as a possible friend, or generate unlucky situations in exams.

### Content

• Achievements – Story
• Achievements – Endings
• Achievements – Stories
• Achievements – Miscellaneous
• Players
• Locations
• Useful purchases
• Mood
• Brain Map: Tactics
• Exams and endings

## Achievements – Story

These achievements are so simple that they will most likely be unlocked during the first playthrough.

• Hello world! - Complete the child stage.
• Leaving home - Complete the preschool stage.
• First steps - Learn 10 skills.
• The light of my eyes - Fulfill an expectation.
• Big world - Complete the elementary school stage.
• Perfectionist - Master 20 skills.
• Someone's in trouble! - Fail to live up to an expectation.
• My personal items! - Buy an item for the first time.
• Path selection - Make a path choice on the skill tree.
• Excellent student - Get an A on an exam.
• Wow! - Use brain map bonuses 100 times.
• Changes - Complete the high school stage.
• Mind games - Get a total of 300 attribute points on a single brain map.
• From 9 to 5 - Complete 10 expectations.
• sights - Go on a trip!
• smart ass - Master 40 skills. "Master" means completely filling the skill learning meter (by adding the skill to the schedule or by spending action points).
• I will definitely hand over - Get fully prepared for an exam. Preparation is increased by studying and developing skills from the school curriculum; hobbies have no effect.
• Pride of parents - Buy a special request. Raise Parent Satisfaction above 70 to unlock Special Requests.
• Entertainment - Take part in entertainment activities 100 times. (Within a single playthrough.)

## Achievements – Endings

• Quite an adult! - End life! Complete the game by passing the final exam with any grade.
• The American Dream - Get the best ending for any career. You need an A grade on the final exam. Raise preparation to 100 for extra moves, and try to collect 8+ cells together: each such combination gives an Eureka booster, which removes all cells of the same color.
• Better luck next time - Get a bad ending. There are two bad endings in the game (one of which is not so bad); they are obtained by lowering Parental Satisfaction or Peace of Mind to zero. The first two times the Disruptions counter increases; the ending comes on the third. Available from elementary school onward.
  • Lower Parental Satisfaction: sit back, have fun all day long, and never study. Eventually your parents will have had enough and pack you off to a military academy.
  • Lower Peace of Mind: study and never rest, overwork yourself, get depressed, and that's it.
• Single - Finish the game without friends. Ignore all three characters that the game offers as possible friends; by the middle of high school the friendships should all have fallen apart.
• Apple from apple tree - Name the child after the parent. Continue playing as a "dynasty" by giving the child exactly the name your character had.

## Achievements – Stories

Friends' stories: The game randomly selects three characters from the list and adds them as possible friends: one in kindergarten, one in middle school, one in high school. There is no dependence on stats or decisions; the characters are always random.
The only way to affect spawning is to continue playing as your character’s child after completing the story (the game never adds the same friends the parents had). But in a generation, repetitions will inevitably begin. To complete the story, you must not ignore and openly send the characters, then they will remain friends, and towards the end of high school, an achievement will open. Number oneComplete Alex’s story!Appears in kindergarten. This is what the future sounds likeComplete Richard’s story!Appears in kindergarten. GirlfriendComplete Alicia’s story!Appears in high school. Wendy KruegerComplete Wendy’s story!Appears in high school. Escape from fateComplete Nathan’s story!Appears in high school. spirit of youthComplete Patty’s story!Appears in high school. toilet storiesComplete Felicity’s story!Appears in high school. Two worldsComplete Kato’s story!Appears in high school. public pressureComplete Sam’s story!Appears in high school. school loveMarry a school friendRomance any of the characters, after the final he / she will become your spouse. adult storiesAdult characters are tied to locations. When you first enter a location with an adult, an introductory cutscene is shown, and then you have the opportunity to learn a new hobby. To complete the story of an adult, you need to open the last step of the hobby (it is not necessary to master by adding to the schedule). At the moment of opening, the final scene will take place with the participation of an adult, and the achievement will open. Stone headComplete the coach’s story!In the gym of the school or in the stadium, locations open according to the plot. Real jamComplete Mei’s story!In the cafe “Checkers”, the location opens according to the plot. In the eye of the beholderComplete Elliot’s story!In the shopping center “Northern Pines”, the location opens according to the plot. In some realmComplete Sergio’s story!In the park “Ghostly surface”, the location opens according to the plot. Game of the YearComplete the story of the Nile!In Starkad 80. Location will open: • in the course of the passage, if friends have Alex, Richard or Wendy. • if you ask your parents for a ticket to slot machines. New beginningsComplete Parul’s story!At the Museum of Art. Location will open: • in the course of the passage, if there is Alicia in friends. Protest!Complete Alessandra’s story!At the Orpheus Theatre. Location will open: • along the way, if friends have Richard or Wendy • if you buy a ticket in the mall. Camera! Motor!Complete Luka’s story!In “Kinokompleks 8”. The location will open in the course of the passage if Richard, Wendy or Nathan are friends. HeartbeatComplete Ming’s story!At the Zone Club. Location will open: • along the way, if friends have Richard, Vivica or Nathan. • if you buy a ticket in the mall. Serve and protectComplete Riley’s story!At the police station. The location opens only if the game has added Wendy or Vivica as possible friends. • Wendy – in high school (after skipping classes and going to the cinema) • Vivica – in high school (when she gets noticed for drawing graffiti). There are no other ways to open the location, or I did not find them. ## Achievements – Miscellaneous Dependence on parentsSpend 3500 Food pride in asking.For one pass. Fulfill all expectations in a row, accumulating enough pride points is not a problem. Start begging from elementary school, otherwise the ancestors may run out of gifts before they reach the amount of 3500. We were never friendsLose a friend.Start ignoring any of your friends. 
This can be combined with the "Single" achievement.

• Young entrepreneur – Participate in work activities 50 times. Counted within a single playthrough.
• Talented student – Raise the passive growth of any attribute to 50. Collect attribute hexes and Rainbow hexes (which boost all attributes); the total builds up without any trouble.
• Ready for a fight – Master the SAT activity. Master the penultimate level in any school subject, after which only SAT II remains.
• Gourmet – Try every dish at least once. Buy all the food available in the locations: the school canteen, Checkers Cafe, La Royale Restaurant, and the Miracle Fair. One dish is easy to miss by accident – the Apple Pie at Checkers Cafe, which for some reason is only sold during high school and then disappears.
• Methodical study – Get fully prepared for all exams. The only trouble spot is the very first exam (kindergarten), for which only 2 turns of preparation are available. To be sure of reaching 100% preparation for it, develop the character from birth along the Empathy and Imagination paths, since those skills are what kindergarten mostly uses. If you go for Constitution (climbing on cabinets) instead, preparation may fall short. All later exams allow far more turns to prepare, so there should be no further problems.
• Sleepless nights – Get an A on all exams. Raise preparation to 100 to get the maximum number of moves. Try to match 6+ cells at once (8+ on the final exam) to earn boosters. Don't spend all your moves at once: before choosing answers, keep 5 moves in reserve in case you need cells of a missing color. An exam can be restarted at any time until you press the "End exam" button – return to the menu and load the save. More detailed exam tips are given below.
• No cheat sheets – Use 10 Eureka boosters during the final SAT exam. A Eureka booster appears when you clear 8+ cells with a single click. Raise your preparation for the final exam to 100 for the maximum number of moves, and collect cells in groups of at least 8, clearing smaller groups only when it helps connect a group of 8 or more.
• Black Friday – Buy a total of 50 items from different stores. Counted within a single playthrough. Any goods count, even carrot sticks in the school cafeteria. If the final exam is near and you still haven't reached 50 items, you can buy out all the clothes in the mall.
• Resourceful student – Use exam boosters 50 times. A booster is a bonus that removes several cells at once (bomb, row clear, Eureka). Every time you clear 6+ cells with one click, a new booster appears on the board.
• And what did you expect? – Collect 15 Stickies. Counted within a single playthrough. Stickies appear on the Brain Map in high school. Normally there is no point in collecting them, so for this achievement start grabbing them as soon as they appear; by then you should have enough Gyrus Points to afford such a useless expense. To find Stickies quickly, use the "eye" and "reveal the whole map" neurons.

## Characters

The game randomly selects three characters from the list and adds them as possible friends: one in kindergarten, one in middle school, one in high school. This does not depend on stats or decisions; the picks are always random. The only way to influence which friends appear is to continue playing as your character's child after finishing the story (the game never offers the same friends the parent had), but after a generation or so repeats inevitably begin.

## Locations

Locations in this game open in a strange way.
Unlocking them is tied to friends' stories: the game has to send you to a location as part of a story. In most cases there are alternative ways to unlock (buying a ticket or leveling a skill), but some places are simply unreachable if the game never offers the required character as a friend. This sometimes leads to absurd situations – you can leave school without ever learning where the school toilet or the library is. The unlock conditions are listed below; whenever a condition reads "have such-and-such character as a friend", the location opens as that character's story progresses.

• Checkers Cafe – Unlocked at the beginning of high school.
• Northern Pines shopping center – Unlocked at the beginning of high school.
• Ghostly Expanse Park – Opens during the story with either of the starting characters (Alex or Richard).
• Miracle Fair – If Alex or Alicia is among your friends.
• La Royale Restaurant –
  • If Vivica is among your friends, agree to dinner with her parents.
  • Or develop Cooking; when you learn the Fish Dishes skill, the restaurant appears on the map.
• Cinema Complex 8 – If Richard, Wendy, or Nathan is among your friends.
• Animal shelter –
  • If Nathan is among your friends.
  • Or develop Empathy and choose the Zoology path; when you learn the first Zoology skill (Invertebrates), the shelter appears on the map.
• Hospital – If Alex, Alicia, Felicity, Nathan, or Vivica is among your friends.
• Police station – If Wendy or Vivica is among your friends.
  • Wendy – the plot opens in elementary school (after skipping classes and going to the cinema).
  • Vivica – the plot opens in high school (when she gets caught drawing graffiti).
• Museum of Art – If Alicia is among your friends.
• Orpheus Theatre –
  • If Richard or Wendy is among your friends.
  • Or buy a ticket at the mall.
• Zone Club –
  • If Richard, Vivica, or Nathan is among your friends.
  • Or buy a ticket at the mall.
• Library (high school) – If Alex, Richard, Wendy, Nathan, Vivica, or Kato is among your friends.
  • Richard, Wendy, Nathan, Vivica, or Kato – opens automatically.
  • Alex – try to find him in the library.
• Toilet (high school) – If Nathan, Alicia, Felicity, Vivica, or Sam is among your friends. The school has two toilets (M and F); with the right characters you can unlock both, though there is no benefit – the same cheat sheets are sold in each.

## Useful purchases

The most useful items are the ones that grant extra pocket money and Action Points. Gyrus Points are less useful – you already collect plenty of them from the Brain Map – but if you have spare cash, why not. Purchasable bonuses fall into four categories: Action Points, pocket money, Gyrus Points, and schedule slots.

If you start buying pocket-money items from middle school, by the beginning of high school you can have a passive income of around +$60 per turn and never worry about money again. Another way to earn extra money is to master a job activity (for example, Chef's Assistant). Any action in the game can succeed or fail (you can see which in the little scene that plays while it runs): a success grants an extra +50% to the attribute gain and the pay, and a mastered activity always succeeds. For the Chef's Assistant, that works out to a base pay of $15 and about $23 for a successful (or mastered) attempt.

## Mood

There are two gauges in the game: Peace of Mind and Parent Satisfaction.
At first glance balancing them looks difficult, because raising one usually lowers the other. Your parents are annoyed by everything in the world that isn't studying or work. Sometimes they will even hand you an absurd Expectation like "go shopping, stop sitting at home" – and then be annoyed by that too, because shopping is entertainment. So what can you do? Several tactics:

• Buy food. For some reason this is the only way to raise Mood without a penalty. Food is sold at the school cafeteria, Checkers Cafe, and La Royale Restaurant.
• Work. It doesn't affect Parent Satisfaction at all (unless there is a work-related Expectation) and it lowers Mood, but it brings in much-needed pocket money that can be spent on food to restore the lost Mood while still leaving you in the black. The more free Action Points you have, the more you can get out of working.
• Trips. A trip raises Mood noticeably less than food and uses up the rest of the turn (you cannot use the schedule), but it compensates a little by granting extra attribute points. Don't forget to spend all your Action Points before traveling.
• Fulfill your parents' Expectations. The Satisfaction bonus is small; the real reason to complete them promptly is to earn Pride, which you can later spend on something useful. If an Expectation really doesn't suit you (say, repeating an already learned skill 10 times), you can simply skip it – that penalty is also small.
• Raise Parent Satisfaction temporarily. You don't need to keep it in the green zone (70+) all the time; push it up only just before you want to buy a Special Request. Once you have saved the necessary Pride, be a model child for a couple of turns and do nothing but study – you will lose Mood, but you will be able to buy what you need. Then lift your spirits again.
• Keep Parent Satisfaction high. Remember it cannot rise above 100. If you have pushed Satisfaction close to that ceiling, you can safely slot expensive entertainment activities into the schedule (the game console, board games, and so on). For example, put a Tabletop RPG (+8 Mood) in the first slots and fill the rest with studying: Satisfaction will dip and then bounce right back.

Bonuses from high Parent Satisfaction and Peace of Mind:

• Parent Satisfaction above 70 gives an extra +20% pocket money at the end of the turn. With an income of +$5 per turn this hardly matters, but at +$50 it becomes interesting.
• Peace of Mind above 70 grants an extra schedule slot, which is useful in elementary and middle school. In high school the number of slots grows to 7 on its own, and by then you will most likely have bought the Daily Planner or the Guide to Polyphasic Sleep, giving 8 slots in total regardless of Mood.

## Brain Map: Tactics

The highest-priority nodes on the Brain Map:

1. Super Insight (reveals the entire map). Grab it as soon as you see it.
2. Gyrus Points. Restores spent Gyrus Points and reveals the adjacent neurons for free. Just make sure none of the points are wasted (you cannot hold more than the maximum).
3. Action Point hex. Increases maximum AP by +5.
4. Gyrus hex. Increases maximum Gyrus Points by +5; it takes effect from the next Brain Map onward.
5. Illumination. Reveals the neurons around it within a radius of 2 to 4 (see its description). If it reveals radius 4, take it; at radius 3, judge by the situation.
At radius 2 it is almost always useless.
6. Next map. You don't need to take it right away, but open a path to it in advance.
7. Rainbow hex. Adds +15 to every attribute (+75 in total) and increases the passive bonus of every attribute by +1.
8. Attribute hexes (+17–30 to the attribute and +1–2 to its passive bonus, depending on the character's age). If you are building a specialized character, you can take only your main attribute and ignore the rest. If you want to develop broadly, collect every hex in a row and mostly ignore the small neurons; the character will still level skills quickly thanks to the passive bonus. Everything below is worth taking only when it lies on the way to useful nodes, or once the useful nodes have run out.
9. Shock wave. Travels across the map collecting the contents of 5–10 adjacent neurons (see its description) and reveals the neurons next to them. Worth taking on an unexplored map to widen the visible area; on explored areas it is of little use. A shock wave sitting in a dead end can get stuck and collect nothing.
10. Teleport. Only worth taking if valuable nodes lie far away on the map; it gives a chance to reach them faster without opening unneeded neurons.
11. If nothing useful is left, take whichever neuron is connected to the most other neurons (ideally 6).
12. Uber attribute neuron (collects all neurons of one type). Not bad, but it shows up so late in high school that it is no longer really needed, and without Super Insight revealing the entire map it is of little use.
13. Knowledge and Super Knowledge. Of little use; the character already earns plenty of Knowledge from mastering skills. Collect them only if you urgently need Knowledge but cannot afford cheat sheets and trinkets from the fair.
14. Surprise. Take it only when the whole map is open and there is no exit from it to the next map. There is a very small chance (about 10%) that it re-rolls the exit; mostly it drops useless junk.
15. Knowledge hex (−10% to the cost of all available skills). The most useless one. A skill's cost cannot fall below its minimum, and if you take this hex hoping for a discount on one expensive skill, the later skills in that branch will still stay expensive; it is more sensible to level the corresponding attribute to bring costs down.

## Exams and endings

The ending depends on two factors: which skill tree the character has progressed furthest in, and the grade received on the final exam. If several skill branches are mastered, the ending uses the profession tied to the branch that was mastered first. For example, with Zoology and Cooking both fully mastered but Zoology finished first, the character goes into veterinary medicine. The final exam grade (A, B, C, D, F) sets the level reached in that career: A is the top (Olympic medalist, bestselling author, all of that), while F means nothing outstanding (always an office clerk, regardless of skills). It is impossible to end up homeless or a drug addict, although I tried very hard. There are only 4 exams (kindergarten, elementary school, middle school, and the final in high school), and only the final one affects the profession.

Tips for anyone who struggles with Candy Crush-style puzzles:

• Raise preparation to 100 to get the maximum number of moves.
Preparation is raised by learning and developing school skills (hobbies have no effect). You can reach roughly 70–90 preparation just by "buying" skills; to hit 100 you have to master them.
• During high school you can also learn skills in the library using Action Points; use this if you need to speed up preparation for the final exam.
• Match 6+ cells at once to earn boosters, and clear small groups only when it helps merge a larger cluster.
• On the final exam, match 8+ cells at once to earn Eureka boosters (which remove every cell of one color from the board). If the board holds many cells of the Eureka booster's color, use it; if not, leave it alone and wait for those cells to accumulate.
• Don't spend all your moves at once: before choosing answers, keep 5 moves in reserve so you can fetch cells of a missing color if needed.
• The exam can be restarted. At any time before pressing the "End exam" button, return to the menu, select "Continue", and the exam starts over from the beginning.

Meet Jimmy, a not-too-bright guy who managed to finish school without ever learning to write or count. He slacked off all his life, skipped classes, and occasionally worked at the local diner as an assistant cook, hoping to get rich and save up for a yellow Ford Mustang. Predictably, he flunked the final exam and now works as an office clerk. Don't be like Jimmy.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1945815533399582, "perplexity": 3428.8186930947795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00588.warc.gz"}
http://www.aanda.org/articles/aa/abs/2004/04/aah4745/aah4745.html
Free Access Issue A&A Volume 414, Number 1, January IV 2004 155 - 161 Galactic structure and dynamics https://doi.org/10.1051/0004-6361:20031635 A&A 414, 155-161 (2004) DOI: 10.1051/0004-6361:20031635 ## The high energy emission line spectrum of NGC 1068 G. Matt1, S. Bianchi1, M. Guainazzi2 and S. Molendi3 1  Dipartimento di Fisica, Università degli Studi Roma Tre, via della Vasca Navale 84, 00146 Roma, Italy 2  XMM-Newton Science Operation Center/RSSD-ESA, Villafranca del Castillo, Spain 3  IASF-CNR, Via Bassini 15, 20133, Milano, Italy (Received 21 August 2003 / Accepted 17 October 2003 ) Abstract We present and discuss the high energy ( E>4 keV) XMM-Newton spectrum of the Seyfert 2 galaxy, NGC 1068. Possible evidence for flux variability in both the neutral and ionized reflectors with respect to a BeppoSAX observation taken 3.5 years before is found. Several Fe and Ni emission lines, from both neutral and highly ionized material, are detected. The intensity of the iron K Compton shoulder implies that the neutral reflector is Compton-thick, likely the visible inner wall of the cm -2 absorber. From the equivalent width of the ionized iron lines a column density of a few cm -2 is deduced for the hot ionized reflector. Finally, an iron (nickel) overabundance, when compared to solar values, of about 2 (4) with respect to lower  Z elements, is found. Key words: galaxies: individual: NGC 1068 -- galaxies: Seyfert -- X-rays: galaxies Offprint request: G. Matt, [email protected]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8449661731719971, "perplexity": 7293.685989842837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00206-ip-10-171-10-108.ec2.internal.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHBB_2010_v47n2_425
$L^p$ ESTIMATES FOR SCHRÖDINGER TYPE OPERATORS ON THE HEISENBERG GROUP

Yu, Liu

Abstract. We investigate the Schrödinger type operator $H_2 = (-\Delta_{\mathbb{H}^n})^2 + V^2$ on the Heisenberg group $\mathbb{H}^n$, where $\Delta_{\mathbb{H}^n}$ is the sublaplacian and the nonnegative potential $V$ belongs to the reverse Hölder class $B_q$ for $q \geq \frac{Q}{2}$, where $Q$ is the homogeneous dimension of $\mathbb{H}^n$. We shall establish the estimates of the fundamental solution for the operator $H_2$ and obtain the $L^p$ estimates for the operator $\nabla^4_{\mathbb{H}^n} H^{-1}_2$, where $\nabla_{\mathbb{H}^n}$ is the gradient operator on $\mathbb{H}^n$.

Keywords: Heisenberg group; Schrödinger operators; reverse Hölder class

Language: English
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 15, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8690233826637268, "perplexity": 671.2243079264294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806455.2/warc/CC-MAIN-20171122031440-20171122051440-00479.warc.gz"}
http://www.maplesoft.com/support/help/MapleSim/view.aspx?path=componentLibrary/signalBlocks/discrete/TransferFunction
Discrete Transfer Function - MapleSim Help

### Discrete Transfer Function block

Description

The Discrete Transfer Function (or Transfer Function) component defines the transfer function between the input signal $u$ and the output signal $y$:

$\frac{y\left(z\right)}{u\left(z\right)}=\frac{{b}_{1}{z}^{{n}_{b}-1}+{b}_{2}{z}^{{n}_{b}-2}+\cdots +{b}_{{n}_{b}}}{{a}_{1}{z}^{{n}_{a}-1}+{a}_{2}{z}^{{n}_{a}-2}+\cdots +{a}_{{n}_{a}}}$

State variables, $x$, are defined according to controller canonical form. Initial values of the states can be set as start values of $x$.

Connections

| Name | Description | Modelica ID |
|------|-------------|-------------|
| $u$ | Continuous input signal | u |
| $y$ | Continuous output signal | y |

Parameters

| Name | Default | Units | Description | Modelica ID |
|------|---------|-------|-------------|-------------|
| b | $[1]$ | | Numerator coefficients of transfer function | b |
| a | $[1]$ | | Denominator coefficients of transfer function | a |
| Sample Period | $0.1$ | $s$ | Sample period of component | samplePeriod |
| $T_0$ | $0$ | $s$ | First sample time instant | startTime |

Modelica Standard Library: the component described in this topic is from the Modelica Standard Library; the original documentation includes author and copyright information.
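As a cross-check of the formula above, here is a minimal plain-Python sketch (not MapleSim or Modelica code; the function name is made up for illustration) that evaluates the same rational function of z as a difference equation. It assumes the numerator is no longer than the denominator, ignores the block's sample-period and start-time parameters, and does not reproduce the controller-canonical state variables, only the input/output behaviour.

```python
def discrete_transfer_function(b, a, u):
    """Filter the input sequence u through y(z)/u(z) = B(z)/A(z).

    b, a: coefficient lists, highest power of z first (len(b) <= len(a)).
    u: list of input samples, one per sample period.
    """
    n = len(a)
    bp = [0.0] * (n - len(b)) + list(b)   # pad numerator to the denominator length
    y = []
    for k in range(len(u)):
        acc = 0.0
        for i in range(n):                # feed-forward part: b coefficients * past inputs
            if k - i >= 0:
                acc += bp[i] * u[k - i]
        for i in range(1, n):             # feedback part: a coefficients * past outputs
            if k - i >= 0:
                acc -= a[i] * y[k - i]
        y.append(acc / a[0])              # normalize by the leading denominator coefficient
    return y

# Example: y(z)/u(z) = 0.4 / (z - 0.6), driven by a unit step.
print(discrete_transfer_function(b=[0.4], a=[1.0, -0.6], u=[1.0] * 10))
```

For these example coefficients the step response climbs toward the DC gain 0.4 / (1 - 0.6) = 1, which is a quick sanity check that the recursion matches the transfer function.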
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7860283851623535, "perplexity": 2783.0894585549486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423486.26/warc/CC-MAIN-20170720201925-20170720221925-00145.warc.gz"}
http://www.scienceforums.com/topic/21373-the-final-piece-of-the-puzzle/page-2
# The Final Piece Of The Puzzle! 129 replies to this topic ### #18 modest modest Creating • Members • 4959 posts Posted 22 April 2010 - 03:38 AM In such a case, I can write [imath]\Phi(\vec{x})[/imath] in a very simple form consistent with standard notation: $\Phi(\vec{x})=-\frac{\kappa M}{r}$ where [imath]\kappa[/imath] is the proper proportionality constant to yield the correct potential generated by the mass M. I was just reading the OP and it's getting late and I had a question, so figured it would be a good place to stop for the night. If your wave equation propagates in 4 dimensional Euclidean space (an impression I got from the special relativity thread) then the vector field causing the refractive index, n, would presumably be an inverse (n-1)th conservative force (F $\propto$ 1/r3). That force, I would think, would be the derivative of Phi making Phi proportional to 1/r2 (in contrast to normal Newtonian gravitational potential) so I'm not quite following how or why you deduced Phi $\propto$ 1/r. Are ds, c, and the geodesic in four dimensions? If so, how have I gone wrong? Thank you, ~modest ### #19 coldcreation coldcreation Resident Bright • Members • 1577 posts Posted 22 April 2010 - 12:08 PM Coldcreation, I really wish I knew why you feel compelled to post to this thread. You obviously have not the first hint of what I am talking about. There's another erroneous presumption I guess we should add to the list. The reason why I feel compelled to post to this thread is simply that it intrigues me why someone, with the background you have, would see the need to overhaul an aspect of a theory—Einstein's concept of gravity as a curved spacetime phenomenon—by replacing it with a concept (Euclidean space) that has shown not to be tenable. Indeed, it is precisely the concept of gravity as curvature that did away with the notions erroneous within that which it replaced (Newtonian gravitation), and the idea that space was Euclidean, independent of time, and absolute (in the sense that there existed a unique gods-eye reference frame). Besides quantum theory there have been few theories produced since the beginning of modern science that have a better track record, i.e., there is excellent corroboration with empirical observations and experimental data related to phenomena such as gravitational time dilation and frequency shift, light deflection and gravitational time delay, orbital effects and the relativity of direction (precession of apsides, orbital decay, geodetic precession and frame-dragging). Nearly being the key word, it seems that a modification, if at all, rather than a complete overhaul, would be more appropriate, in order to clear up the discrepancies with GR and that which is observed and theorized to be operational on meso- and microscopic scales. What makes you think that gravity is not due to a curved spacetime manifold? That “almost” is there for honesty. There exist aspects of modern physics which I have not yet proved are indeed implied by my fundamental equation. It is not an easy equation to solve. [...] and finally, (as the last piece of the puzzle of identifying constraints implied by my equation with supposed conclusions of modern physics) showing that Einstein's theory of General Relativity is essentially an approximate solution to my fundamental equation. 
I have not yet achieved “complete corroboration with all empirical evidence” but I have certainly deduced enough to convince me that modern physics is a tautology (as it is quite clear that my fundamental equation is certainly a tautology). As I said, it is not exactly an easy equation to solve. [...] Yes, and I will make a prediction: there probably exists other empirical evidence which can be shown to be approximate solutions to my fundamental equation. I was referring to your statements: "The fact that my result is not exactly the same as that obtained from Einstein's theory is not too troubling. And: "It appears (at least to me at the moment) that the effect of that extra term in my solution is to make the gravitational field appear to be slightly stronger than estimated via Einstein's field theory. If that conclusion is correct, then it could also explain the “dark matter” problem." If there are indeed differences that you describe perhaps you should center your predictions around those. For starters you could attempt to predict rotational curves without cold dark matter for galaxies more accurately than GR with CDM... That would be one giant leap for mankind. Of course you do because you have absolute faith in the validity of the standard “by guess and by golly” approach used by modern physics. Actually I do not. I've been known on occasion to display a dislike for certain mainstream (and certain less mainstream) physical theories (e.g., inflation, with its false vacuum, strings with its 21 dimensions, branes, even the big bang theory with its CDM, DE and repulsive force: lambda). For example, Einstein's general relativity tells us many things about the physical universe, 'reality' (and is thus not a religion). Yeah, I know! That is exactly what you are convinced of and you would like the rest of us to believe. I say it is no more than another consequence of internal self consistency of the argument (but as I get something a little different, either I have made a deductive error, which is certainly possible, or Einstein's theory is not internally consistent). The fact that no one has succeeded in presenting a quantized version of that theory is somewhat indicative of its failure to be internally consistent. Each is entitled to 'believe' that which pleases him or her. As it stands, GR is in empirical agreement with observations That is what makes it compelling (nothing to do with belief). Again, if you can make a prediction to within a more accurate decimal place, please do so. Your tau axis seems more akin to some belief system. You did after all write of it; "tau is a complete fabrication of our imagination" Your hypothesis is not solely that "an explanation must be internally self consistent!" There is more involved than just that. You clearly have never read the proof as, if you had, you could explicitly point out the other hypotheses. Maybe there's a better word than hypothesis: contentions, or presumptions for example. One example is that you feel Euclidean space is more consistent a schema than curved spacetime. Your definition, or interpretation, of time is another. The idea that gravity is a pseudo-force is yet another. Not to mention tau again. These are all part of your hypothesis, or contention(s). As I have commented earlier, you seem to have utterly no comprehension of what puzzles are under discussion. And, as an aside, if you were at all aware of my proof you would clearly comprehend the nature of that tau axis. 
[snip] That would be a good place to start clearing up what people find confusing about your proof. To what extent do you feel tau is relevant to the natural world. How can the concept of tau be tested? Why do you need to include tau in your equations? What would be the consequences if it were to be removed? PS. Your OP would have been easier to follow had you started with an abstract, and ended with a conclusion (at least for those who have not sifted through all your other threads). CC ### #20 Doctordick Doctordick Explaining • Members • 1043 posts Posted 22 April 2010 - 07:46 PM Thank you modest for taking the trouble to think about what I have said. If your wave equation propagates in 4 dimensional Euclidean space (an impression I got from the special relativity thread) then the vector field causing the refractive index, n, would presumably be an inverse (n-1)th conservative force (F $\propto$ 1/r3). That force, I would think, would be the derivative of Phi making Phi proportional to 1/r2 (in contrast to normal Newtonian gravitational potential) so I'm not quite following how or why you deduced Phi $\propto$ 1/r. Your assertion concerning the radial form of a conservative force is mathematically true in a Euclidean geometry in the Newtonian picture as it is intimately related to the way the surface area changes with respect to the radius. The effect is easily seen in a simplified notion of photon exchange forces. Exchange forces are attributed to momentum exchanges mediated by the exchange of virtual particles. The force (i.e., that change in momentum) is directly related to the probability that a given virtual particle will be exchanged and that probability is in turn proportional to the cross section of the interaction as seen by the interacting bodies: i.e., the same spacial density of interacting virtual particles from a given source, as seen from a more distant object, declines as the area of the encompassing sphere increases. That area is proportional to r squared and therefore the magnitude of the force becomes inverse to r squared. And you are correct. In a four dimensional Euclidean geometry in the Newtonian picture a conservative force would be proportional to the inverse of r cubed, yielding a potential proportional to the inverse of r squared. However, that is not what we have here. In this case, both interacting bodies are momentum quantized in the tau direction. The whole circumstance yields utterly no variation in the physical probability density of these interacting bodies (or the density of the virtual exchange particles) in the tau direction. This situation essentially projects out the tau dimension and all forces and potentials come back to the three dimensional dynamics (except for the subtle consequences of that momentum in the tau direction: i.e., what we call mass). Are ds, c, and the geodesic in four dimensions? If so, how have I gone wrong? Yes, ds and c must both be evaluated in the four dimensional space as they are exactly that subtle consequence of momentum in the tau direction of which I spoke. Mass, the quantization of momentum in the tau direction, and likewise the tau component of its path and the tau component of its velocity do not change in the tau direction thus the existence of the tau dimension can not be ignored with respect to these terms. I hope that makes things a little clearer. 
As I have commented elsewhere, I am currently working on a post of my proof of my fundamental equation and I think that post will clear up a lot of these kinds of questions. The reason why I feel compelled to post to this thread is simply that it intrigues me why someone, with the background you have, would see the need to overhaul an aspect of a theory—Einstein's concept of gravity as a curved spacetime phenomenon—by replacing it with a concept (Euclidean space) that has shown not to be tenable. Again you demonstrate that you do not have even the slightest idea as to what I am doing. I have no compulsion to overhaul any aspect of Einstein's theory; my only purpose is to endeavor to find the consequences of my proof. Since you have utterly no idea as to why my fundamental equation must be valid, your comments are totally off the subject. I won't put you on my ignore list because you appear to have a little education; however, I don't think I will respond to your posts until I have a little more evidence that you are trying to understand this stuff. Have fun -- Dick ### #21 IDMclean IDMclean A Person • Members • 1670 posts Posted 22 April 2010 - 08:23 PM To CC, One of the points DD's consistently made in this thread and in others is that his system and proofs show that physics is a logical tautology deriving from his fundamental equation. As far as I can tell, DD's perspective holds that this is a sort of disproof of physics as anything other than a mental construction of the human kind having little to do with the machinations of physical world beyond our interpretation of it. Seems, DD has different aims in this respect than your average scientist. He's out to disprove the independence of physics and scientific method as it stands from philosophical and logical concerns by showing that it is in fact equivalent to any other axiomatic system we're familiar with. As such, it would constitute a logical argument against science as independent of the concerns of philosophy. "There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination." —Daniel Dennett, Darwin's Dangerous Idea, 1995. DD, do correct me if I'm wrong in my interpretation of your argument, but this is the gist I get from what I've read so far of your work. ### #22 Doctordick Doctordick Explaining • Members • 1043 posts Posted 23 April 2010 - 12:00 AM Well Mclean, I am impressed. I was kind of unimpressed by the character “KickAssClown”, but your recent posts have led me to believe there are possibilities there. Your post number 21 to this thread seems to indicate a rather intelligent approach. I have no real argument with that post at all. All I have is a few subtle adjustments to your perspective. One of the points DD's consistently made in this thread and in others is that his system and proofs show that physics is a logical tautology deriving from his fundamental equation. Absolutely and incontrovertibly true. As far as I can tell, DD's perspective holds that this is a sort of disproof of physics as anything other than a mental construction of the human kind having little to do with the machinations of physical world beyond our interpretation of it. Essentially true; however, there are a number of conclusions which might be drawn from that statement that really kind of misrepresent the situation. Seems, DD has different aims in this respect than your average scientist. 
He's out to disprove the independence of physics and scientific method as it stands from philosophical and logical concerns by showing that it is in fact equivalent to any other axiomatic system we're familiar with. As such, it would constitute a logical argument against science as independent of the concerns of philosophy. Again an essentially correct statement; however, it seems to imply an attitude which is somewhat askew of what I actually have in mind. To quote post #14 I made to this thread I discovered the proof of my fundamental equation when I was still a graduate student back in the sixties but it seemed pretty worthless because I couldn't solve the equation. I discovered the first solution around 1983 and after some work, I attempted to publish my proof about twenty years ago. I don't think it made it past any of the referees. The physicists said it was philosophy, the philosophers said it was mathematics and the mathematicians said it was physics. I have since come to the conclusion that it is indeed philosophy, that is why I am posting to the “Philosophy of Science” forum. Certainly it has absolutely nothing to say about mathematics as nothing in my work yields anything new to the field of mathematics. And, essentially, my work has nothing to say about physics (except perhaps the fact that Einstein's theory of General Relativity has some problems) as, for all practical circumstances, it essentially confirms most all of modern physics. Just as an aside with regard to the confirmation of modern physics, Newtons orbital calculations essentially (except for a few exceptions) confirmed the charts prepared through the “cycle and epicycle” theory of Claudius Ptolomy (which, today, is clearly seen as no more than a mathematical mechanism for cataloging cosmic positions). Newton admitted the possibility of error in his calculations; however, as it turned out the differences were errors on the part of the astronomers citing the actual data. Likewise, Einstein's theory could still be correct as it is entirely possible that I have made an error in my algebra. On the other hand, perhaps Einstein is wrong. Time will tell. The reason I bring this up is the fact that I really have no complaint with physics. Considering the problems they have solved and the foundations they have to work with, they have done an excellent job. It is all based upon the presumption that the “by guess and by golly” attack is the only attack available to them. My real complaint is with the field of philosophy. It is the philosophers who have dropped the ball here. They have taken the issues underlying the fundamental questions of interest and lathered them over with gobs and gobs of esoteric bullshit. It is time that philosophers became a little more exact in their analyses. I think I have kind of thrown out the ball there but, except for Anssi, have seen no rational reaction. Philosophy was once considered the Queen of all scientific investigations. But no more. Today philosophy is considered to be a field totally concerned only with stirring bullshit and that is a sad report on their efforts over the last three thousand years. "There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination." —Daniel Dennett, Darwin's Dangerous Idea, 1995. I couldn't put it any better myself (though I don't think I would call evolution a “dangerous idea”). For what it is worth, consider yourself corrected. 
Have fun -- Dick ### #23 modest modest Creating • Members • 4959 posts Posted 23 April 2010 - 12:36 AM In this case, both interacting bodies are momentum quantized in the tau direction. The whole circumstance yields utterly no variation in the physical probability density of these interacting bodies (or the density of the virtual exchange particles) in the tau direction. This situation essentially projects out the tau dimension and all forces and potentials come back to the three dimensional dynamics (except for the subtle consequences of that momentum in the tau direction: i.e., what we call mass). That makes sense. I think I had read previously, but forgot, the consequences of uncertainty in tau being infinite. Thank you for the explanation. I'll pick up where I left off when the girlfriend eases up on the whip (metaphorically speaking ) ~modest Understanding • Members • 1209 posts Posted 23 April 2010 - 01:31 PM I have a comment and a question about tau dimension. I have since come to the conclusion that it [the Fundamental Equation] is indeed philosophy, that is why I am posting to the “Philosophy of Science” forum Well, this is exactly what I have been trying to explain to DD for over one year now. And, I have presented why his Fundamental Equation is important to philosophy. When DD indicated a few posts ago that he agrees with the comment that ....physics is a logical tautology deriving from his fundamental equation...I have suggested that the reason is because his Fundamental Equation is derived from "tautology itself"---that is, from the philosophic Law of Identity, which was presented by Aristotle > 2000 years ago as the ultimate Fundamental Equation A = A ! This is why I agree with DD that his Fundamental Equation has great philosophic importance. It shows how A = A (which is a statement about ontology) can be transformed into an isomorphic equation [the DD Fundamental Equation] that is a statement about epistemology (explanation itself). Imo, this is the final piece of the "philosophic puzzle"--the DD Fundamental Equation completes the thinking of Aristotle about the "mathematical" relationship between existence and knowledge--it puts philosophy on a sound mathematical basis. But of course--only if one accepts the Law of Identity as a valid premise. == My question is about the tau dimension used by DD. Now, DD indicates that "mass" is derived from his tau dimension--yet he also indicated (I do believe) that this dimension is "abstract". So, I recall a similar situation with use of the Schroedinger Equation as relates to shell structure in the atomic nucleus for nucleons. This model predicts (using the Schroedinger Equation) the interactions of independent nucleons as if they are being "acted on" by an energy potential (V) that is an abstract dimension related to a central harmonic energy well. That is, the "mass" of the nucleons is a function of this interaction with an abstract concept. So my question--is this what DD is saying with the relationship between "mass" and abstract "tau" ? Is his claim about mass & tau the same philosophic relationship as between the mass of a nucleon and abstract central energy well within the nucleus as a whole as predicted by quantum mechanics via use of Schroedinger Equation ? If yes, then what DD is claiming makes perfect sense to me. If no, please explain why. 
### #25 AnssiH AnssiH Understanding • Members • 773 posts Posted 29 April 2010 - 05:23 PM Hi, I had not noticed you had posted this thing (Didn't pick up on the title I guess I'll try to get around to start a walk through soon, probably this weekend. But first one comment. It would be very nice if the discussion about the underlying issues would be held here; Simply because it will make it easier to follow and backtrack this thread if the responses are more strictly about the derivation of general relativity. It is of course okay to make small comments, but long conversations about the underlying issues just make it time consuming for me to find information about the actual derivation later. Following that note, I'll post a little comment to Coldcreation to that thread. -Anssi ### #26 lawcat lawcat Explaining • Members • 768 posts Posted 30 April 2010 - 03:01 PM To CC, DD's perspective holds that this is a sort of disproof of physics as anything other than a mental construction of the human kind having little to do with the machinations of physical world beyond our interpretation of it. . I agree that indeed that is the DD's conclusion. However, it is a rather dangerous conclusion. DD negates experience. All that exists is sets and logic. DD concludes that physics is nothing but entertainment. in other words, physics is math which is entertainment of the mind. Only sets and relationships between sets exist, only math exist. Experience may exist but we know nothing of it. His central premise is we know nothing other than that which we create in our mind. While enticing, this is rather illogical imo. Because the very sets and logic that give rise to mathematics are the product of experience of ourselves. If we do not trust ourselves, our experience, then we can not even begin to trust the sets. Only nothing would be valid. I find DD's equation scientific, but his conclusions dangerous because they negate experience. From the bottom up, DD's equation is scientific as a synthesis; not an analysis. It is no different than mathematically synthesizing any other set of sums into a single expression. To that end I agree with DD that it is falsifiable as a mathematical expression. As has been posted by others, if there exists one valid theory inconsistent with DD's equation, then the equation is falsifiable. From the top down, his analysis of the equation is purely logical. He sets up some definitions and analyzes those to come up with the equation. To the extent that the specified elements are elements of DD's space, they appear valid--no different then any other valid set. But what of this equation? In essence, so what? Of course we invented math. We certainly did not mine it in a cave. ### #27 AnssiH AnssiH Understanding • Members • 773 posts Posted 01 May 2010 - 01:11 PM Okay, let's get to it... Relativity is the mathematical transformation between two different geometric coordinate systems. Back in Newton's day, such transformations were quite straight forward as Euclidean coordinate systems were assumed applicable to reality. If the origin of a Euclidean coordinate system (coordinate system “b”) was at point (x0,y0,z0) in the original coordinate system (coordinate system “a”) then any point in coordinate system “a”, say point (xa,ya,za) was simply represented by the point (xa-x0,ya-y0,za-z0) in coordinate system “b”. This transformation was exactly the same even if the point being referred to as (x0,y0,z0) was moving in any arbitrary manner. I.e. 
even if the coordinate system b was moving inside the coordinate system a in an arbitrary manner. The difference between my system and Euclid's original system is that tau axis. Momentum quantization along that tau axis (mass) introduces some very subtle consequences when it comes to physical measurements such that the simple transformation above does not yield the measurements as taken by a person at rest in the moving system: i.e., special relativity is a necessary part of such a transformation in order to compensate for the effects of momentum quantization along the tau axis. Since you mention momentum quantization along the tau axis, I stopped to think about that a bit. I am not sure why was it quantized instead of being a continuous variable. I suppose it had got something to do with the uncertainty in the position of tau being infinite (since the momentum is constant/known?), but my understanding is quite shaky when it comes to the details of how the uncertaintly principle plays out here. Also, I suspect this issue related to the use of dirac constant in the definition of mass operator [imath]-i\frac{\hbar}{c}\frac{\partial}{\partial \tau}[/imath]...? Nevertheless, yes I remember how special relativistic transformation came into play. . . . The question then arises, how is it that Einstein's theory appears to circumvent Maupertuis' proof? The answer revolves around the principal of “least action” he invented as a means of calculating paths consistent with Newtonian physics (essentially minimizing the energy with respect to the path). Maupertuis showed that the problem was a consequence of the fact that, when it came to gravitational paths, different velocities led to different paths: i.e., two different objects behavior could not be reduced to geodesic motion in the same reference frame, something which must be true in the proper inertial frame. The inclusion of time in Einstein's space-time continuum allows this critical variation to be achieved; however, it turns out that this is exactly that same issue which creates the critical problems when it comes to “quantizing the gravitational field”. Thus it is that there are very real problems bringing quantum mechanics into Einstein's General Relativity theory. To date all attempts, that I am aware of, have resulted in failure. Yup, thank you about that whole explanation. It might be interesting to take a quick look at that Maupertuis' proof but I did not find it... Nevertheless, I have pretty good suspicion about how the inclusion of time axis and its relativistic transformation can get around proofs that rely on newtonian definition of time. I don't have very good idea about the problems with the traditional attempts of gravity quantization. Wikipedia only vaguely mentions problems with re-normalization. Compare this to my presentation which is totally consistent with quantum mechanics from the very beginning. Beyond that, in my presentation, mass is defined to be momentum in the tau direction of a four dimensional Euclidean geometry. As a consequence, that hypothetical (see as unexaminable) tau dimension can be simply scaled to make the velocity of every elemental entity through that four dimensional geometry look exactly the same. i.e. whatever velocity they are "missing" in the [imath]x,y,z[/imath]-axes, is to be attributed to their velocity along the [imath]\tau[/imath]-axis... That final fact totally circumvents Maupertuis' proof. 
It seems certainly reasonable to once again look at the consequences of general transformations and perhaps find that “non-inertial” geometry which yields gravity as a pseudo force. So let us proceed to examine some aspects of that circumstance. To begin with, my fundamental equation does indeed require a specific frame of reference: that frame of reference being at rest with respect to the entire universe. In that particular frame, (remember, that frame is a standard Euclidean frame) any object (any collection of fundamental elements forming a stable pattern) which can be seen as essentially not interacting with the rest of the universe (i.e., those interactions may be ignored and we are looking at a “free” object) will obey Newton's laws of motion in the absence of a force (it will not accelerate in any way). That is, the only forces which appear in such a circumstance are those pseudo forces we want to examine in this analysis: i.e., apparent forces which are entirely due to the fact that our coordinate system is not at rest with respect to the universe. True, we have created a situation which obviously circumvents Maupertuis' proof... Hmmm, well not very obvious to me, as I don't know how his proof worked exactly... I'm guessing what you are referring to is that, since Maupertuis' proof had something to do with the inability to "reduce two different objects' behaviour to geodesic motion in the same reference frame", this presentation form with a [imath]\tau[/imath]-axis (that gets "projected out"), will allow two different objects with different velocities in [imath]x,y,z[/imath] space to have the same total velocities when the velocity along [imath]\tau[/imath] is included. Not really sure yet how that will circumvent the proof. I understand though, that in the standard GR presentation form, the objects in gravitational free fall are following geodesic paths in the general relativistic coordinate system. but, since energy is now not a function of velocity, ...i.e. not a function of the total velocity in the [imath]x,y,z,\tau[/imath]-space...? Are you saying that because, for instance, an object is considered to gain (kinetic) energy when it gains velocity in the [imath]x,y,z[/imath] directions, while its total velocity remains unchanged? we have also made the original formation of his principal of action into an unusable procedure (his relationship related velocities and we now have no relationship to minimize); ...since velocities never change in the [imath]x,y,z,\tau[/imath]-space...? however, there is another attack (actually the same attack but somewhat subtly different). If gravity is to be a mere pseudo force, we can use the fact that our model must reproduce the exactly the same classical pseudo forces produced by Newton mechanics. This is true as these forces are no more than a direct consequence of expressing the path in a non-inertial frame or, in my case, a reference frame not at rest with respect to the universe. Yup. It is interesting to look at centrifugal force as a well understood Newtonian pseudo force. From the perspective of an observer at the center of the rotation, a string exerting a force equal to the centrifugal force will appear to maintain the object under the influence of that pseudo force at rest: i.e., a rock at the end of a string swinging in a circle will appear to be at rest in a reference frame rotating with that rock. We can then see the object as a test probe into the force field describing that specific pseudo force. 
Since my total interest is in explaining gravity as a pseudo force, I want to examine this force as seen from the perspective of being m times the negative gradient of a gravitational potential (which is the typical way of defining a gravitational potential). In this case [imath]\vec{F}=-m\vec{\nabla}\Phi [/imath] where [imath]\Phi(\vec{x})[/imath] is the gravitational potential. Had a little adventure in Wikipedia about the definition of gravitational potential etc, and yes that seems to make perfect sense to me. I gathered that the "gradient of a gravitational potential" is essentially the radial derivative of the function describing gravitational potential... I.e. the "slope" of a graph describing that function. Also I gathered that the definition of gravitational potential is the "potential energy" PER unit mass, so that's why you are including the "m" there in your equation (I was getting a bit confused at first). So in other words, [imath]\vec{F}=-m\vec{\nabla}\Phi [/imath] refers to the strength of the "fictional force" that affects an object with given mass in a given location in a gravitational field. (Just thought I'll say that out loud to benefit other people who are starting with almsot zero physics knowledge, like me ) Any freshman physics text will provide an excellent derivation of centrifugal force. The result is the quite simple form, [imath]\;\vec{F}=m\omega^2\hat{r}\;[/imath] where omega is the angular velocity and [imath]\hat{r}[/imath] is a unit vector in the radial direction. I'll take that on faith for now. It follows that the analogous gravitational representation implies that the required [imath]\Phi(\vec{x})[/imath] (which, by the way, must by symmetry be a radial function) must obey the relationship $-\frac{\partial}{\partial r}\Phi(r)=r\omega^2$. I.e. the radial derivative, or the "slope" of the gravitational potential, is related to the angular velocity somehow? Hmmm, you must be referring to a situation where some object is orbiting the center of the gravitational field, so then its angular velocity and the distance from the center are related to the strength of the associated "fictional force", which is keeping it in stable orbit. Intuitively, that makes sense, but I am not entirely sure how you got that exact expression (specifically, the "r" on the right side). I mean, I understand that [imath]\vec{\nabla}\Phi = \frac{\partial}{\partial r}\Phi(r)[/imath] since its a radial function, and I guess we are essentially associating gravitational force with a centrifugal force capable of negating it...? $-m\frac{\partial}{\partial r}\Phi(r) = m\omega^2\hat{r}$ ...I guess... ? And you have removed the "m" from both sides, but I am not sure how the "r" finds its way to the right side the way it did. I mean, I understand the associated fictional force is a function of the radius, but how do we know the [imath]\omega^2[/imath] is simply multiplied by the radius? I'll pause here as I am getting the feeling I may well be interpreting you wrong already... -Anssi ### #28 Doctordick Doctordick Explaining • Members • 1043 posts Posted 02 May 2010 - 12:35 AM Hi Anssi, I am sorry I hadn't let you know about the post. There is another post you should be aware of, I am speaking of “Laying out the representation to be solved”. I would appreciate any comments you might wish to make on the clarity of that post. I don't think you will have any problems understanding what I said, but, if you do, we can try and clear them up. Meanwhile, I will attempt to clarify this thread. 
It would be very nice if the discussion about the underlying issues would be held here; Simply because it will make it easier to follow and backtrack this thread if the responses are more strictly about the derivation of general relativity. I wouldn't argue with you but I suspect you will not find many takers. There are a lot of people here who's sole purpose is to create confusion by making it difficult to backtrack threads. Rade long ago made his intentions quite clear and at the moment I am seriously suspicious of lawcat. I am considering placing lawcat on my ignore list. If he keeps up the kind of comments he has been making I will do so. I didn't see the “ignore list” as a proper response to ignorance but I am beginning to realize that intentional ignorance is a bothersome issue which should be ignored. Since you mention momentum quantization along the tau axis, I stopped to think about that a bit. I am not sure why was it quantized instead of being a continuous variable. It is a direct consequence of the manner and purpose with which tau was introduced. That issue will show up (hopefully more clearly) in my further posts regarding my proof of my fundamental equation. For the moment, let's not worry about it as discussion at this point will probably generate nothing but further confusion. Just take it as a proved aspect of tau. Also, I suspect this issue related to the use of dirac constant in the definition of mass operator [imath]-i\frac{\hbar}{c}\frac{\partial}{\partial \tau}[/imath]...? That issue you seem to have somewhat backwards; however, again I suggest you not worry about it for the moment. It will be clarified in the actual proof which I am currently trying to restate. Yup, thank you about that whole explanation. It might be interesting to take a quick look at that Maupertuis' proof but I did not find it... Nevertheless, I have pretty good suspicion about how the inclusion of time axis and its relativistic transformation can get around proofs that rely on newtonian definition of time. As I said, Maupertuis' proof has to do with the fact that objects with different velocities follow different paths in a gravitational field. This is totally counter to the proposition that “pseudo forces” create paths which are mirror images of the path of the non-inertial frame. That idea means that the path certainly can not depend upon the velocity of the object; not in a Euclidean geometry anyway. i.e. whatever velocity they are "missing" in the [imath]x,y,z[/imath]-axes, is to be attributed to their velocity along the [imath]\tau[/imath]-axis... Absolutely correct! Hmmm, well not very obvious to me, as I don't know how his proof worked exactly... Well, if everything in the universe is traveling through my four dimensional Euclidean geometry at exactly the same velocity, then velocity differences don't exist and the actual paths can once more be mirror images of the path of the non-inertial frame. Path dependence on the velocity of the objects vanishes absolutely. Are you saying that because, for instance, an object is considered to gain (kinetic) energy when it gains velocity in the [imath]x,y,z[/imath] directions, while its total velocity remains unchanged? Not really. The issue here is that nothing can depend upon the velocity if the velocity of every object in the universe is the same. ...since velocities never change in the [imath]x,y,z,\tau[/imath]-space...? Exactly! 
I gathered that the "gradient of a gravitational potential" is essentially the radial derivative of the function describing gravitational potential... I.e. the "slope" of a graph describing that function. That is correct. So in other words, [imath]\vec{F}=-m\vec{\nabla}\Phi [/imath] refers to the strength of the "fictional force" that affects an object with given mass in a given location in a gravitational field. (Just thought I'd say that out loud to benefit other people who are starting with almost zero physics knowledge, like me ) I think you have that pretty straight. I.e. the radial derivative, or the "slope" of the gravitational potential, is related to the angular velocity somehow? As I stated, centrifugal force is given by $\vec{F}=mr \omega^2\hat{r}$ (Ah, Anssi, you have once again caught an error in a post of mine. I omitted that “r” in there. I have edited the original post to correct the error. I thank you once again. You are clearly the only person who reads my stuff carefully.) Sorry you took it on faith to be correct. You shouldn't do that with my stuff; I do certainly make errors. At any rate, the force due to the gravitational potential is given by [imath]\vec{F}=-m\vec{\nabla}\Phi[/imath]. Setting these two forces to be identical (together with the fact that [imath]\vec{\nabla}\Phi=\frac{\partial}{\partial r}\Phi(r) \hat{r}[/imath], because the potential changes only through a change in r), one has immediately the fact that $r\omega^2=-\frac{\partial}{\partial r}\Phi(r)$ which I am sure you would have accepted had it not been for my gross error. Hmmm, you must be referring to a situation where some object is orbiting the center of the gravitational field, so then its angular velocity and the distance from the center are related to the strength of the associated "fictional force", which is keeping it in stable orbit. No, I am talking about an object restrained by a string to the center of the coordinate system. The coordinate system is not an inertial system (or, in my case, at rest with respect to the universe) but is rather rotating with an angular velocity of omega. Because I do not know the coordinate system is rotating, I presume the force is caused by a gravitational potential. Thus, what I am calculating is the consequence of the rotation and then attributing it to gravitational effects; essentially casting the supposed gravitational force as a "pseudo" force. Intuitively, that makes sense, but I am not entirely sure how you got that exact expression (specifically, the "r" on the right side). Yeah, I omitted that r in my specification of the centrifugal force. I'll pause here as I am getting the feeling I may well be interpreting you wrong already... Nope, you are doing beautifully. As per usual, you just caught me in a serious error and I thank you for that. Know that I appreciate you beyond belief. Thanks -- Dick ### #29 AnssiH AnssiH Understanding • Members • 773 posts Posted 02 May 2010 - 08:52 AM Hi Anssi, I am sorry I hadn't let you know about the post. There is another post you should be aware of, I am speaking of “Laying out the representation to be solved”. I would appreciate any comments you might wish to make on the clarity of that post. I don't think you will have any problems understanding what I said, but, if you do, we can try and clear them up. Okay, yeah, I saw it yesterday, but didn't read it through yet.
As I said, Maupertuis' proof has to do with the fact that objects with different velocities follow different paths in a gravitational field. This is totally counter to the proposition that “pseudo forces” create paths which are mirror images of the path of the non-inertial frame. That idea means that the path certainly can not depend upon the velocity of the object; not in a Euclidean geometry anyway. Hmm, there's something I'm definitely missing here... :I I suppose you are referring to, for instance, a cannon ball being shot at different velocities, and consequently following different trajectories... I'm just thinking that, since only the rate of descent would be attributed to the fictional force, the different trajectories shouldn't be a problem, at least not very obviously so. I'm also thinking that with a coriolis effect the path obviously depends on the velocity of the object too, i.e. the final path is not exactly a mirror image of the path of the non-inertial frame... Hmmm, unfortunately I just can't find any information about that Maupertuis' proof, I don't know why is it so hard to find. I'm just finding all sort of material about his expidition to lapland Well, if everything in the universe is traveling through my four dimensional Euclidean geometry at exactly the same velocity, then velocity differences don't exist and the actual paths can once more be mirror images of the path of the non-inertial frame. Path dependence on the velocity of the objects vanishes absolutely. No comments, as I don't yet even understand the problem exposed by Maupertuis... (Ah, Anssi you have once again caught an error in a post of mine. I omitted that “r” in there. I have edited the original post to correct the error... Oh okay, now [imath]r\omega^2=-\frac{\partial}{\partial r}\Phi(r)[/imath] makes sense to me I guess I should have expected to see an "r" somewhere in there, but I had already absorbed too much information at one sitting, and was starting to feel cloudy No, I am talking about an object restrained by a string to the center of the coordinate system. The coordinate system is not an inertial system (or, in my case, at rest with respect to the universe) but is rather rotating with an angular velocity of omega. Because I do not know the coordinate system is rotating, I presume the force is caused by a gravitational potential. Thus, what I am calculating is the consequence of the rotation and then attributing it to gravitational effects; essentially casting the supposed gravitational force as a "pseudo" force. Yup. Back to OP; It follows that the analogous gravitational representation implies that the required [imath]\Phi(\vec{x})[/imath] (which, by the way, must by symmetry be a radial function) must obey the relationship $-\frac{\partial}{\partial r}\Phi(r)=r\omega^2$. This fact directly implies that [imath]-\Phi=\frac{1}{2}(r\omega)^2=\frac{1}{2}|\vec{v}|^2[/imath] At first I did not realize at all how you got to that expression (so unfamiliar with math), but after toying around with wolfram alpha a bit, I realized that [imath]\frac{\partial}{\partial r}\frac{1}{2}(r\omega)^2 = r\omega^2[/imath], so that's why you are saying [imath]-\Phi=\frac{1}{2}(r\omega)^2[/imath]. Then, regarding [imath]\frac{1}{2}(r\omega)^2=\frac{1}{2}|\vec{v}|^2[/imath] we are in unbelievable luck; my eyes just accidentally landed on a wikipedia text mentioning that the alternative to [imath]F = m\omega^2r[/imath] is [imath]F = \frac{mv^2}{r}[/imath] So that implies; $\frac{v^2}{r} = r\omega^2$ i.e. 
$v^2 = (r\omega)^2$ Hence; $\frac{1}{2}(r\omega)^2=\frac{1}{2}|\vec{v}|^2$ And even I can't believe I figured that out, like I said, complete accident since I don't know the derivation of the centrifugal force hehe Anyway, looks valid to me. or, multiplying by [imath]\frac{2}{c^2}[/imath], one can conclude that $\frac{2}{c^2}\Phi(r)=-\left(\frac{|\vec{v}|}{c}\right)^2$. Yup. In other words, the gravitational potential (as seen in the frame where the object being observed appears to be at rest) seems to be directly related to the actual velocity as seen from the correct frame (the frame at rest with the universe). Wait a minute... About the "velocity as seen from the correct frame"; are we now talking about a situation where an object is orbiting the center of a gravitational field? I.e., the frame where the object being observed appears to be at rest is essentially a rotating frame? Or a situation where an object is basically just sitting on the ground? Or both? This result is very interesting. As the observed object is actually moving in the correct frame, we should expect a clock (or any temporal physical process moving with that object) to proceed in accordance with special relativity. This implies that the correct relativistic transformation of the instantaneous time differentials should be given by $dt'=dt\sqrt{1-\left(\frac{|\vec{v}|}{c}\right)^2}\equiv dt\sqrt{1+\frac{2\Phi}{c^2}}$ Indeed. I realize that is essentially the standard Lorentz factor, but for the benefit of lurkers, in this analysis that relationship was derived in the thread about special relativity, and the same conclusion can be seen in the end of the little animation related to the analysis: YouTube - Presentation of a moving clock $Cycles_m = Cycles_r \sqrt{1 - \sin^2(\theta)}$ I.e., if you backtrack from there a bit; $Cycles_m = Cycles_r \sqrt{1 - \left (\frac{v}{c} \right )^2}$ which happens to be exactly the standard gravitational red shift. Looked at the Wikipedia page for gravitational red shift and gravitational time dilation, and yes looks very familiar, albeit I did not find that exact expression. This implies that any geometry which yields gravity as a pseudo force must also yield the standard gravitational red shift... . . . ......i.e., instead of seeing the speed of light as slower in a gravitational field we could just as well see the speed as unchanged and the distances as increased. After all, once time is defined, distances are reckoned via the speed of light. Though that satisfies the original goal expressed above, the idea of refraction (the speed of light being slowed in a gravitational field) is a much simpler expression of the solution. It is certainly the most convenient method of finding the proper geodesics. In fact, there is a very simple view of the situation which will yield exactly that result. Got all the way to that paragraph and did not have problems with it. I'll have a rest here again. -Anssi ### #30 modest modest Creating • Members • 4959 posts Posted 02 May 2010 - 09:15 AM $dt'=dt\sqrt{1-\left(\frac{|\vec{v}|}{c}\right)^2}\equiv dt\sqrt{1+\frac{2\Phi}{c^2}}$ which happens to be exactly the standard gravitational red shift. Looked at the Wikipedia page for gravitational red shift and gravitational time dilation, and yes looks very familiar, albeit I did not find that exact expression. Substitute $\Phi = -GM/r$ (from the definition of gravitational potential) and that's probably how wiki has it expressed. It is sometimes expressed as a function of potential in the literature; see Eq. 4.4 and Eq. 17.18 in the linked references. ~modest
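To gather the chain of steps above in one place (a summary sketch in the notation of this thread, with modest's substitution folded in): the force balance gives $-\frac{\partial}{\partial r}\Phi(r)=r\omega^2$, integrating with the constant chosen as zero gives $\Phi(r)=-\frac{1}{2}(r\omega)^2=-\frac{1}{2}|\vec{v}|^2$, and inserting that into the special-relativistic relation yields $dt'=dt\sqrt{1-\left(\frac{|\vec{v}|}{c}\right)^2}=dt\sqrt{1+\frac{2\Phi}{c^2}}$. With $\Phi=-GM/r$ this becomes $dt'=dt\sqrt{1-\frac{2GM}{rc^2}}$, the usual textbook form of gravitational time dilation.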
### #31 Doctordick Doctordick Explaining • Members • 1043 posts Posted 02 May 2010 - 08:56 PM Hmm, there's something I'm definitely missing here... :I I suppose you are referring to, for instance, a cannon ball being shot at different velocities, and consequently following different trajectories... I'm just thinking that, since only the rate of descent would be attributed to the fictional force, the different trajectories shouldn't be a problem, at least not very obviously so. I'm also thinking that with a Coriolis effect the path obviously depends on the velocity of the object too, i.e. the final path is not exactly a mirror image of the path of the non-inertial frame... Hmmm, unfortunately I just can't find any information about that proof of Maupertuis', I don't know why it is so hard to find. I'm just finding all sorts of material about his expedition to Lapland Sorry about that. I also googled Maupertuis and found no reference whatsoever to the proof. The central problem here is that I studied physics so long ago that I suspect many things were given quite different spins back then. It could also be that my approach might be colored by idiosyncrasies of individual professors. Whatever the cause, I have been out of contact with the academy for many, many years. I seem to remember Maupertuis getting credit back then. In my original paper, I had referenced that issue to page 6 of the 1965 edition of “Introduction to General Relativity” by Adler, Bazin and Schiffer. The following is an exact quote of what is presented on page 6 of the 1965 edition. Why has Einstein's idea of geometrizing the gravitational field of force not been conceived before? To answer this question let us look at the most geometrical of all variational principles of mechanics, namely, the principle of Maupertuis. In its simplest form it states the following: Let a particle move in a field of force with the potential V(x,y,z). If it travels from a point [imath]P_1[/imath] to a point [imath]P_2[/imath] with the varying velocity v, its trajectory is that actual curve which yields a stationary value for the action integral [imath]\int^{P_2}_{P_1}vds[/imath] among all paths connecting [imath]P_1[/imath] and [imath]P_2[/imath] which can be run through with the same constant energy [imath]E=\frac{1}{2}mv^2 +V[/imath] of the particle. We may express this principle in the obvious variational formula $\delta\int^{P_2}_{P_1}\sqrt{\frac{2}{m}(E-V)}\,ds=0$ In the case of V=0, we obtain the rectilinear motion asserted by the law of inertia. In the case of a nonvanishing potential V(x,y,z), we can introduce a metric based on the line element $dl^2=\frac{2}{m}[E-V(x,y,z)](dx^2_1+dx^2_2+dx^2_3)$ and formulate the trajectory condition as $\delta\int^{P_2}_{P_1}dl=0$ In the new differential geometry with this line element dl, the trajectory would indeed be a geodesic. But observe that, for different particles in the same field and with different energies E, the geometry would have to be a different one, which is impossible. This fact precluded a geometrization of dynamics. We can see the same difficulty from the following consideration. Suppose that the gravitational field of the sun creates a non-Euclidean geometry and that the planets have to move along the geodesics of this geometry. It is well known that, if we prescribe a point in space and a direction through this point, there exists exactly one geodesic passing through the point with the prescribed direction.
On the other hand, two particles in a gravitational field fired from the same point in the same direction will move along the same trajectory only if their initial velocities are equal. Thus only one projectile could at most follow the corresponding geodesic. Indeed geometry deals with the space variables and directions, but velocity is a concept involving time, and it is the initial velocity which enters into the determination of a trajectory. In the theory of special relativity Einstein had shown that space and time variables are inextricably connected and transform among each other under Lorentz transformations. A reduction of gravitational theory to geodesic motion in an appropriate geometry could be carried out only in the four-dimensional space-time continuum of relativity theory. That this is indeed possible is the main thesis of this book. That a reduction of the theory of gravitation to geometry was hardly possible before the special theory of relativity should be clear from the preceding considerations. Note that the whole issue boils down to “different velocities” for different objects. Note that with regard to the centrifugal and Coriolis forces, there is a simple Euclidean transformation which yields a straight line path for a free particle: i.e., the transformation to a non-rotating system. The issue with gravity is that they were searching for a transformation to a non-Euclidean system. Since the orbits return upon themselves the geometry simply can not be Euclidean because, in a Euclidean system, straight lines can not yield closed paths. In other words to find a rational result, they had to examine much more complex systems. And even I can't believe I figured that out, like I said, complete accident since I don't know the derivation of the centrifugal force hehe I think you are beginning to learn some mathematics. Regarding centrifugal force the issue is actually quite simple. Most people measure angles in degrees. Physicists tend to measure angles in “radians” (it makes a lot of angular mathematics easy). By definition, instead of 360 degrees in a circle, there are [imath]2\pi[/imath] radians. That makes arc lengths easy to specify. If the angle [imath]\theta[/imath] is measured in radians the arc length is just [imath]r\theta[/imath]. Angular velocity is generally represented by [imath]\frac{d\theta}{dt}=\omega[/imath]. Thus the velocity along the circle of that rock out there on the end of the string is given by [imath]r\omega[/imath]. Note that the speed of the rock does not change; what changes is its direction. The change in velocity is always perpendicular to the path; it is towards the center, the direction the string is pulling. So the change in velocity (the acceleration) is the component of its velocity parallel to the string after time dt. Well the distance the rock has moved along the path is [imath]vdt=r\omega dt[/imath] so the change in velocity in the time dt is [imath]dv=(vdt)d\theta=r\omega d\theta[/imath] (draw a picture, the two triangles, radius against distance moved and velocity against velocity change, involved here are similar). Divide that by dt and one has the acceleration [imath]\frac{dv}{dt}=r\omega \frac{d\theta}{dt}= r\omega^2[/imath]. Wait a minute... About the "velocity as seen from the correct frame"; are we now talking about a situation where an object is orbiting the center of a gravitational field? No, I am still talking about the centrifugal force, the rock on a string. 
Seen from the rotating frame (where it is at rest) there is an apparent force holding it out there against the string. I am just calling that force a “gravitational force” because, looking at it from my rotating frame, I have no idea what is causing it to pull on that string. The “correct frame” is the one which is not rotating. Looked at the wikipedia page for gravitational red shift and gravitational time dilation, and yes looks very familiar, albeit I did not find that exact expression. I will plead senility on that one. I don't know exactly where I got the expression (that was a lot of years ago). I will go with modest on the validity of the expression; see his second link (equation 17.19). Got all the way to that paragraph and did not have problems with it. I'll have a rest here again. Thank you very much for your efforts. I won't post any more of my proof of the fundamental equation until you have proof read the “laying out the representation” post. Have fun -- Dick ### #32 AnssiH AnssiH Understanding • Members • 773 posts Posted 04 May 2010 - 02:05 PM Just a short reply for now... In my original paper, I had referenced that issue to page 6 of the 1965 edition of “Introduction to General Relativity” by Adler, Bazin and Schiffer. The following is an exact quote of what is presented on page 6 of the 1965 edition. Note that the whole issue boils down to “different velocities” for different objects. Okay, I didn't understand all of that, but I have a faint idea of it having to do with getting inconsistent energies, and about relativistic time relationships adjusting them correctly... I think. Note that with regard to the centrifugal and Coriolis forces, there is a simple Euclidean transformation which yields a straight line path for a free particle: i.e., the transformation to a non-rotating system. The issue with gravity is that they were searching for a transformation to a non-Euclidean system. Since the orbits return upon themselves the geometry simply can not be Euclidean because, in a Euclidean system, straight lines can not yield closed paths. In other words to find a rational result, they had to examine much more complex systems. Right, okay. Regarding centrifugal force the issue is actually quite simple. Most people measure angles in degrees. Physicists tend to measure angles in “radians” (it makes a lot of angular mathematics easy). By definition, instead of 360 degrees in a circle, there are [imath]2\pi[/imath] radians. That makes arc lengths easy to specify. If the angle [imath]\theta[/imath] is measured in radians the arc length is just [imath]r\theta[/imath]. Angular velocity is generally represented by [imath]\frac{d\theta}{dt}=\omega[/imath]. Thus the velocity along the circle of that rock out there on the end of the string is given by [imath]r\omega[/imath]. Note that the speed of the rock does not change; what changes is its direction. The change in velocity is always perpendicular to the path; it is towards the center, the direction the string is pulling. So the change in velocity (the acceleration) is the component of its velocity parallel to the string after time dt. Well the distance the rock has moved along the path is [imath]vdt=r\omega dt[/imath] so the change in velocity in the time dt is [imath]dv=(vdt)d\theta=r\omega d\theta[/imath] (draw a picture, the two triangles, radius against distance moved and velocity against velocity change, involved here are similar). 
Divide that by dt and one has the acceleration [imath]\frac{dv}{dt}=r\omega \frac{d\theta}{dt}= r\omega^2[/imath]. Ah, I see, pretty clever No, I am still talking about the centrifugal force, the rock on a string. Seen from the rotating frame (where it is at rest) there is an apparent force holding it out there against the string. I am just calling that force a “gravitational force” because, looking at it from my rotating frame, I have no idea what is causing it to pull on that string. The “correct frame” is the one which is not rotating. Right, okay. I will plead senility on that one. I don't know exactly where I got the expression (that was a lot of years ago). I will go with modest on the validity of the expression; see his second link (equation 17.19). Yes, looks like there it stands in exactly the same form. Thank you for digging that up Modest. Thank you very much for your efforts. I won't post any more of my proof of the fundamental equation until you have proof read the “laying out the representation” post. Yup, I'll take a look at it. -Anssi Questioning • Members • 180 posts Posted 09 May 2010 - 12:00 PM That idea together with differential calculus created an extremely powerful mathematical method of predicting the dynamic behavior of objects (an object being any suitable stable defined collection of information). Since acceleration is the time derivative of velocity (where velocity is the time derivative of position in that “inertial frame”) force can be thought of as the time derivative of momentum (momentum being given by [imath]m\vec{v}[/imath]). If m is a constant, the expression [imath]\vec{F}=\frac{d}{dt}(m \vec{v})[/imath] is identical to [imath]\vec{F}=m\vec{a}[/imath] and if m is allowed to change, that fact simply allows a rather simple mechanism to handle cases where identical forces cause different accelerations. As a consequence, m ends up being little more than a parameter allowing a more versatile definition of force. So here we are using the equation [imath]\vec{F}=m\vec{a}[/imath] as the definition of force in any inertial frame. That is, any frame that is not accelerating, or equivalently one that has no force acting on it, which is sort of a circular definition so maybe we should avoid it. But doesn’t this bring up the issue that different frames will not agree on the force on a object as they may not agree on length or mass so they wont agree on the acceleration of an object? Also, what about the possibility that the mass will be a function of t or is it also required by our choice of using an inertial reference frame that the mass is a constant? That brings up the interesting question, “what does the dynamic behavior look if one is using a non-inertial frame”. Clearly, a non-inertial frame is a frame which is accelerating relative to an inertial frame. It should be clear to the reader that any object at rest in any inertial frame (which then, by definition, has no forces accelerating it) will appear to be accelerating if its position is represented by coordinates in a non-inertial frame representation. It should also be quite clear that it is not really accelerating at all, it is only the reference frame which is actually changing (accelerating). The apparent motion of any force free physical object who's position is being represented via the non-inertial frame will be an exact mirror image of that frames acceleration. 
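For concreteness, that "mirror image" statement can be written out (a minimal one-dimensional sketch): if the non-inertial frame is displaced by $s(t)$, so that the represented coordinate is $x'=x-s(t)$, then a force-free object with $\ddot{x}=0$ appears to obey $\ddot{x}'=-\ddot{s}(t)$, i.e., it seems to be acted on by a pseudo force $F=-m\ddot{s}(t)$, exactly the negative mirror image of the frame's own acceleration.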
So is there a way to tell what frame we are in, is it as simple as saying that in a non-inertial reference frame we are experiencing a force and the transformation you are talking about is a purely mathematical means of making the force on a object vanish in our explanation. That is, we are now interested in finding a reference frames where if an object is experiencing a force it no longer has a force acting on it in the new coordinate system. And the rest of the universe is now experiencing a force in the opposite direction that results in the same acceleration. $dt'=dt\sqrt{1-\left(\frac{|\vec{v}|}{c}\right)^2}\equiv dt\sqrt{1+\frac{2\Phi}{c^2}}$ which happens to be exactly the standard gravitational red shift. This implies that any geometry which yields gravity as a pseudo force must also yield the standard gravitational red shift; or, alternately, gravitational red shift is not really a valid test of Einstein's general theory of relativity. This really isn't very enlightening as the gravitational red shift can be shown to be required by conservation of energy, but it does nonetheless imply that the above analysis is valid. That is, the only thing that a different force will imply is the actual form and value of [imath]\Phi[/imath] which is the equivalent velocity of an object. That is, it is the velocity that the object will have in a reference frame where the force on the object vanishes that will make the Lorenz transformation the correct transformation. But I don’t understand how this results in red shift of a object as it looks like all that you have done is calculate the change of a clock in an accelerating reference frame (the left side of the equation) in comparison to what a clock in an inertial reference frame will measure (the right side of the equation). More importantly, the above suggests an attack towards determining the geometry which will yield gravity as a pseudo force in our four dimensional Euclidean geometry. I have already shown how static structures appear as three dimensional objects in this geometry so let us examine what is commonly called “a gravitational well”. The gravitational well consists of a vertical hole where there is a gravitational field in the vertical direction. If an experimenter in a gravitational well sets up a clock via a light pulse traveling back and forth between two horizontally displaced mirrors, since we can establish horizontal measure (simple vertical lines carry those measures to different heights in the hole) and his clock must run slow, we must see the apparent velocity of light to be $c'=c\sqrt{1+\frac{2\Phi}{c^2}}$ So do we conclude that the passage of the clock in the gravitational well is running slower because just like in your example of centrifugal force the object must act as though it is moving in comparison to a inertial reference frame. That is, the Lorenz transformation is valid we just have to solve for the required function to substitute in for the velocity of the object in it’s inertial frame. Won’t this result though, in the conclusion that the length of his clock is longer then our clock rather then a faster speed of light since we have defined length and speed to be interdependent of each other and so it would make sense to measure his clock by means of how long light takes to move from one of the mirrors to the other. As a result we would conclude that the lines that are extended up from the mirrors are not straight but are bent lines. 
Or is this the very problem with general relativity: it assumes that the speed of light is constant and as a result must generate a geometry where the lines are bent? And since the geometry that you are using is a Euclidean geometry, those lines must be straight lines, so we have no way to conclude that the speed of light is the same in a gravity well and must in fact conclude that it is slower. But if this is the case, what will someone in a gravity well see when looking at a clock that is not in the gravity well? Will they conclude that our clock is running faster than one not in the gravity well? Also, I don't understand how you arrived at the above equation, but I suspect that it is an issue of velocity rather than length being scaled by the Lorentz transformation. ### #34 AnssiH AnssiH Understanding • Members • 773 posts Posted 12 May 2010 - 05:07 PM With regard to the issue of refraction, my fundamental equation is a wave equation with Dirac delta function interactions. Clearly, in the absence of interactions, the probability wave representing an event will proceed at a fixed velocity. Any specific delta function interaction can be seen as an impact changing the direction and energy of that probability wave. What is important here is the fact that interaction will depend upon the distance between the two elements connected by that delta function interaction: i.e., the hypothetical element (which must be a boson) must carry the momentum and energy being transferred, and the transfer must be consistent with the Heisenberg uncertainty principle: i.e., the uncertainty in momentum is directly related to the uncertainty in position. This implies that the further apart the interacting fermions are, the less the momentum transferred must be (see virtual particle exchange). I'm struggling here a bit. First, I am not exactly sure what you mean by a probability wave representing an event. I was inclined to think of this in terms of probability waves representing the positions of defined objects. Do you just see them as essentially the same thing, or do you refer to something like a probability wave representing something like a collision between a virtual particle and a fermion? Anyway, I read that page about the virtual particle exchange, did not understand everything about it, but I picked up some idea of what you are talking about. I did not pick up, though, why the Heisenberg uncertainty principle implies that the further apart the interacting fermions are, the less momentum transfer must be... I guess it has something to do with how the wave functions interfere, but I am completely unfamiliar with that subject and didn't manage to figure it out... Any physical object (any structure stable enough to be thought of as an object) must have internal forces maintaining that structure. Any interaction with another distant object must be via the virtual particle exchange I just commented about. Thus it is that one would expect that the fundamental element of that physical object interacting via that delta function would have its momentum altered, not the whole object; however, that alteration would create a discrepancy in the structure of the object under discussion. Since that object must have internal forces maintaining its structure, it is to be expected that those internal interactions (which are also mediated by delta function exchange forces) will bring the trajectory of that interacting fundamental element essentially back to its original path (at least on average).
Thus it is that the path of that fundamental element can be seen as crooked as compared to its path in the absence of that distant object. Of issue is the fact that, if the influence of the distant object is ignored, the influenced element will inexplicitly appear to be proceeding at a slightly slower velocity than it would if the distant object didn't exist. What is important here is that this effect decreases as the distance from the distant object increases. That means that the net effect is to yield a very slight change in the speed of the elements which make up that object as one moves across the object. The net effect of such an interaction is to refract the wave function of the object under examination. Right I see, the point is that those sub-elements of the object that are closer to the distant object, are slowed down slightly more than those sub-elements that are further away, so the total average path would be expected to curve... I understand the analogy to refraction now. So I suppose when you refer to the speed of an element, you must be referring to its speed in [imath]x,y,z,\tau[/imath]-space. I.e. it is because every element must move at the same fixed speed, that more wiggling means slower speed towards the average direction of the entire object. If the distant object and the object under observation are not moving with respect to one another (they are moving parallel to one another in the tau direction), the net effect of that refraction is to curve the paths of the two objects towards one another: i.e., there will be an apparent attraction between them. Yup. It is also evident that, since the mass of the source object (the source of these bosons external to the object of interest) is proportional to the total momentum of that object, one should expect the apparent density (as seen from the object of interest) should be proportional to its mass: i.e., one should expect the exchange forces to be proportional to mass. Right... When you say "total momentum", you are essentially referring to the number and density of the individual elements moving along [imath]\tau[/imath] (at fixed speed). Clearly the interaction just discussed arises from differential effects in the basic interactions thus it will amount to a force considerably less than the underlying force standing behind that differential effect. Thus it is that the two forces I have already discussed (the forces due to massless boson exchange: shown to yield electromagnetic effects and the forces due to massive boson exchange: shown to yield fundamental nuclear forces) will end up being split into four forces. I may be forgetting something, but where and when the massive boson exchange/nuclear forces were brought up...? Differential effects will yield a correction to both basic forces which correspond quite well with the forces observed in nature. The differential effect on massless boson exchange yields what appears to be a very weak gravitational force (weak when compared to the underlying electromagnetic effects) and the differential effect on massive boson exchange yields what appears to be a very weak nuclear force (weak compared to the underlying nuclear force). Right, that makes perfect sense to me, if there's a force mediated by boson exchange, there must be a corresponding differential effect via refraction, due to the fixed speed of the elements... What is interesting is that the “weak nuclear force” can be shown to violate parity symmetry whereas the “weak electrical force” (gravity) does not. 
This is a direct consequence of the fact that the nuclear exchange bosons are massive. That is very interesting... Another very unexpected feature of modern physics turning out to be a very expected feature of the symmetry requirements. Should probably discuss that issue little bit at some point as well. Apart from this analysis, does any satisfying explanation exist as to why weak nuclear force violates parity symmetry? I'll have a pause here again, but first I still have one more comment. When I'm thinking about the differential effect caused by boson exchange forces, I can't help but think of the unexplained behaviour of foucault pendulum during a solar eclipse. Decrypting the Eclipse - NASA Science Current theories of gravity don't explain the Allais effect, and AFAIK the best attempt to explain it so far is that the shadow of the moon causes air to cool down and its density to increase, which causes a gravitational effect. That apparently causes too small gravitational effect to actually explain the Allais effect though (I don't have the skills to work it out, but certainly that explanation sounds dubious at best). At any rate, someone more skilled than me could probably work out whether this paradigm explains Allais effect directly as a gravitational effect. I guess the moon should be expected to affect the virtual particle exchange between the sun and the pendulum one way or another. I can't work out what sort of interference one might expect between all the relevant bodies, but I'm just thinking if this explains the Allais effect as a gravitational effect, that would be remarkable. And another first. -Anssi
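Returning to the refraction picture quoted earlier: one way to make it a little more concrete (a sketch, using only the $c'$ expression already given in this thread) is to note that if the apparent speed of light is $c'=c\sqrt{1+\frac{2\Phi}{c^2}}$, then a gravitational field behaves like a medium with an effective index of refraction $n=\frac{c}{c'}=\frac{1}{\sqrt{1+2\Phi/c^2}}\approx 1-\frac{\Phi}{c^2}$ for small $\Phi/c^2$. Since $\Phi$ is negative near a mass, $n>1$ there, and paths bend toward the mass just as light bends toward the denser part of a medium.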
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8659507632255554, "perplexity": 475.00725352375764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093974.67/warc/CC-MAIN-20150627031813-00138-ip-10-179-60-89.ec2.internal.warc.gz"}
https://hackage.haskell.org/package/monoid-extras-0.5/docs/Data-Monoid-Coproduct.html
monoid-extras-0.5: Various extra monoid-related definitions and utilities

Data.Monoid.Coproduct

Description

The coproduct of two monoids.

# Documentation

data m :+: n

m :+: n is the coproduct of monoids m and n. Values of type m :+: n consist of alternating lists of m and n values. The empty list is the identity, and composition is list concatenation, with appropriate combining of adjacent elements when possible.

Instances:

- (Show m, Show n) => Show ((:+:) m n), with methods showsPrec :: Int -> (m :+: n) -> ShowS, show :: (m :+: n) -> String, showList :: [m :+: n] -> ShowS
- Semigroup ((:+:) m n), with methods (<>) :: (m :+: n) -> (m :+: n) -> m :+: n, sconcat :: NonEmpty (m :+: n) -> m :+: n, stimes :: Integral b => b -> (m :+: n) -> m :+: n
- Monoid ((:+:) m n): the coproduct of two monoids is itself a monoid; methods mempty :: m :+: n, mappend :: (m :+: n) -> (m :+: n) -> m :+: n, mconcat :: [m :+: n] -> m :+: n
- (Action m r, Action n r) => Action ((:+:) m n) r: coproducts act on other things by having each of the components act individually; method act :: (m :+: n) -> r -> r

inL :: m -> m :+: n

Injection from the left monoid into a coproduct.

inR :: n -> m :+: n

Injection from the right monoid into a coproduct.

mappendL :: m -> (m :+: n) -> m :+: n

Prepend a value from the left monoid.

mappendR :: n -> (m :+: n) -> m :+: n

Prepend a value from the right monoid.

killL :: Monoid n => (m :+: n) -> n

killL takes a value in a coproduct monoid and sends all the values from the left monoid to the identity.

killR :: Monoid m => (m :+: n) -> m

killR takes a value in a coproduct monoid and sends all the values from the right monoid to the identity.

untangle :: (Action m n, Monoid m, Monoid n) => (m :+: n) -> (m, n)

Take a value from a coproduct monoid where the left monoid has an action on the right, and "untangle" it into a pair of values. In particular, m1 <> n1 <> m2 <> n2 <> m3 <> n3 <> ... is sent to (m1 <> m2 <> m3 <> ..., (act m1 n1) <> (act (m1 <> m2) n2) <> (act (m1 <> m2 <> m3) n3) <> ...) That is, before combining n values, every n value is acted on by all the m values to its left.
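A small usage sketch, not part of the original Haddock page (it assumes GHC with the monoid-extras package installed and a Prelude that exports (<>)), showing how values from the two monoids interleave in a coproduct and how killL/killR project them back out:

```haskell
{-# LANGUAGE TypeOperators #-}

import Data.Monoid.Coproduct  -- (:+:), inL, inR, killL, killR
import Data.Monoid (Sum (..))

-- Interleave contributions from two monoids:
-- Sum Int on the left, [String] on the right.
example :: Sum Int :+: [String]
example = inL (Sum 1) <> inR ["hello"] <> inL (Sum 2) <> inR ["world"]

main :: IO ()
main = do
  print (killR example)  -- keeps only the left monoid:  Sum {getSum = 3}
  print (killL example)  -- keeps only the right monoid: ["hello","world"]
```

untangle additionally requires an Action m n instance (from Data.Monoid.Action) so that the left monoid can act on the right one; it is left out of this sketch.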
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6885702610015869, "perplexity": 13475.704492046778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578616424.69/warc/CC-MAIN-20190423234808-20190424020808-00421.warc.gz"}
http://dml.cz/handle/10338.dmlcz/134182
# Article Full entry | PDF   (0.3 MB) Keywords: second order linear difference equation; symplectic system; phase; oscillation; nonoscillation; trigonometric transformation Summary: References: [1] C. D. Ahlbrandt, A. C. Peterson: Discrete Hamiltonian Systems. Difference Equations, Continued Fractions, and Riccati Equations. Kluwer Academic Publ., Boston, 1996. MR 1423802 [2] M. Bohner, O. Došlý: Disconjugacy and transformations for symplectic systems. Rocky Mountain J. Math. 27 (1997), 707–743. MR 1490271 [3] M. Bohner, O. Došlý: Trigonometric transformations of symplectic difference systems. J. Differential Equations 163 (2000), 113–129. MR 1755071 [4] M. Bohner, O. Došlý, W. Kratz: A Sturmian theorem for recessive solutions of linear Hamiltonian difference systems. Applied Math. Letters 12 (1999), 101–106. MR 1749755 [5] O. Borůvka: Lineare Differentialtransformationen 2. Ordnung. Hochschulbücher für Mathematik. Band 67. VEB, Berlin, 1967; Linear Differential Transformations of the Second Order, The English Univ. Press, London, 1971. MR 0236448 [6] O. Došlý: Phase matrix of linear differential systems. Čas. Pěst. Mat. 110 (1985), 183–192. MR 0796568 [7] O. Došlý, R. Hilscher: Linear Hamiltonian difference systems: transformations, recessive solutions, generalized reciprocity. Dynamical Systems and Applications 8 (1999), 401–420. MR 1722970 [8] F. Neuman: Global Properties of Linear Ordinary Differential Equations. Mathematics and Its Applications (East European Series), Kluwer Acad. Publ., Dordrecht, 1991. MR 1192133 | Zbl 0784.34009 [9] S. Staněk: On transformation of solutions of the differential equation $y^{\prime \prime }=Q(t)y$ with a complex coefficient of a real variable. Acta Univ. Palack. Olomucensis, F.R.N. 88 Math. 26 (1987), 57–83. MR 1033331 [10] P. Šarmanová: Otakar Borůvka and Differential Equations. PhD. thesis, MU, Brno, 1998. Partner of
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.968682050704956, "perplexity": 4355.722578338468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704664826/warc/CC-MAIN-20130516114424-00026-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.mathlearnit.com/polynomial-fractions.html
# Polynomial Fractions

Polynomial fractions, or fractions involving polynomials, can often be simplified and made neater.

Examples

(1.1)  $\frac{3x-9}{3}$

This can be split into 2 fractions with the same denominator: $\frac{3x}{3} - \frac{9}{3}$

The 3's can be cancelled out in the fraction on the left to leave just x.

=>  $\frac{3x}{3} - \frac{9}{3}$  =  $x - 3$

(1.2)  $\frac{4x+8}{2}$

The same approach can be used again here.

$\frac{4x}{2} + \frac{8}{2}$  =>  $2x + 4$

Could also have factored the top line in a slightly different, yet similar approach.

$\frac{4x+8}{2}$  =  $\frac{4(x+2)}{2}$  =  $2(x+2)$  =  $2x + 4$

## Polynomial Fractions, Further

Fractions involving polynomials can also be a bit more complex, when there are variables such as x in the denominator on the bottom line also.

Examples

(2.1)  $\frac{9x^{3}+12x^{2}}{3x}$

=>  $\frac{9x^{3}}{3x} + \frac{12x^{2}}{3x}$  =  $\frac{9x^{2}}{3} + \frac{12x}{3}$  =  $3x^{2} + 4x$

(2.2)  $\frac{5x^{3}+2x}{4x^{4}}$

=>  $\frac{5x^{3}}{4x^{4}} + \frac{2x}{4x^{4}}$  =  $\frac{5x^{2}}{4x^{3}} + \frac{2}{4x^{3}}$  =  $\frac{5x^{2}+2}{4x^{3}}$

(2.3)  $\frac{6x^{2}-12x}{x-2}$

The top line can be factored.

$\frac{6x(x-2)}{x-2}$  =  $6x$
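One more practice example in the same spirit (not from the original page), using the split-and-cancel steps above:

$\frac{10x^{4}-15x^{2}}{5x^{2}}$  =  $\frac{10x^{4}}{5x^{2}} - \frac{15x^{2}}{5x^{2}}$  =  $2x^{2} - 3$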
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8480383157730103, "perplexity": 15976.58385370472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591332.73/warc/CC-MAIN-20180719222958-20180720002958-00293.warc.gz"}
https://brilliant.org/discussions/thread/an-observation-by-myself/
# An Observation by Myself !

This observation seems extremely intuitive, but is very helpful when dealing with most of the problems involving the Second Principle of Mathematical Induction: $$\boxed{a^{k}-b^{k} = (a^{k-n}-b^{k-n})(a^{n}+b^{n})-(ab)^{n}(a^{k-2n}-b^{k-2n})}$$

INMO Type Problem: Let $$R = (5 \sqrt{5} + 11)^{2n+1}$$ and let $$f$$ be the fractional part of $$R$$. Show that $$Rf = 4^{2n+1}$$. By the way, the above problem is not original.

Note by Karthik Venkata 2 years, 9 months ago

$$R=I+f$$ $$0<5\sqrt 5 -11 <1$$ Let $$(5\sqrt 5 -11)^{2n+1}=f'$$ Now, $$I+f-f' = (5\sqrt 5+11)^{2n+1}-(5\sqrt 5 -11)^{2n+1}= 2\left({}^{2n+1}C_1 (5\sqrt 5)^{2n}\cdot 11 + {}^{2n+1}C_3 (5\sqrt 5)^{2n-2}\cdot 11^3 + \dots\right)$$ Here the RHS is an integer, so the LHS should be too, so $$f=f'$$. Hence $$Rf=Rf'=\left((5\sqrt 5 +11)^{2n+1}(5\sqrt 5 -11)^{2n+1}\right) = 4^{2n+1}$$ - 2 years, 9 months ago

I think there is a fallacy: $$I + f - f' =$$ an integer need not imply that $$f = f'$$. It may even be like $$f = 1 + f'$$ or $$f = 2 + f'$$ or generally $$f = n + f'$$ where $$n$$ is any integer, and still the fact that the LHS (as calculated by binomial expansion) is an integer holds good. - 2 years, 9 months ago

Hey man, $$f$$ is the fractional part of $$R$$, hence $$0 \le f<1$$. It's a standard solution to these types of problems. - 2 years, 9 months ago

I am not exposed to problems involving binomial expansion, sorry for troubling you! I am getting confused.... - 2 years, 9 months ago

How do you say that the RHS is an integer? There is no proper reason for your claim. And also please do not include facts like Binomial Expansion, as the aim of this note is to deal with elegant proofs of theorems that are direct consequences of Induction Axioms. - 2 years, 9 months ago

Can you tell me in which year this question came in the INMO? - 2 years, 9 months ago

It says that this is an INMO-type problem, I guess it never actually came. However, I have a book (Problem-Solving Strategies by Arthur Engel) that has a problem which uses a very similar approach (adding the conjugate of the irrational number) for a certain problem. In 1980, the oncoming IMO in Mongolia was cancelled on political grounds, and Luxembourg hosted an Ersatz-IMO in Mersch. According to my book, that problem occurred in this Ersatz-IMO in 1980. This was the problem: Find the first digit before and after the decimal in: ${ (\sqrt { 2 } +\sqrt { 3 } ) }^{ 1980 }$
- 2 years, 9 months ago RHS is an integer cause $$5\sqrt 5$$ always have even power and it is old IITJEE problem too.This method is absolutely correct .I subtracted those two terms to make RHS an integer.It is not a claim it is consequence of my subtraction - 2 years, 9 months ago Well thanks for explaining ! :) - 2 years, 9 months ago Great observation. The $$n = 1$$ case is similar to what we get from Newton's Identities, namely $a^k + b^k = ( a^{k-1} + b^{k-1} ) ( a + b ) - ( ab) ^ 2 ( a ^ {k-2} + b^ { k - 2 } ).$ Staff - 2 years, 9 months ago Thanks Sir ! There is a really close connection between my observation and Newton's Identities, although they are used for completely different purposes ! - 2 years, 9 months ago
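A quick numerical sanity check of the claim $$Rf = 4^{2n+1}$$ (not part of the note; this is a minimal Python sketch using high-precision decimals, and the helper name `check` is my own):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80  # enough digits that the fractional part is computed accurately

def check(n):
    r = (5 * Decimal(5).sqrt() + 11) ** (2 * n + 1)   # R = (5*sqrt(5) + 11)^(2n+1)
    f = r - int(r)                                    # fractional part of R
    return r * f, Decimal(4) ** (2 * n + 1)           # these two should agree

for n in range(1, 6):
    lhs, rhs = check(n)
    print(n, lhs, rhs)   # lhs matches rhs up to rounding in the last few digits
```

This only verifies the identity numerically for small n; the proof itself is the conjugate argument given above.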
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925660490989685, "perplexity": 1886.3341131665522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887077.23/warc/CC-MAIN-20180118071706-20180118091706-00659.warc.gz"}
https://www.physicsforums.com/threads/real-analysis-monotone-subsequence.185587/
# Homework Help: Real analysis monotone subsequence

1. Sep 18, 2007

### Scousergirl

1. The problem statement, all variables and given/known data

Prove: Let (Xn) be a sequence in R (the reals). Then (Xn) has a monotone subsequence.

2. Relevant equations

Def: Monotone: A sequence is monotone if it increases or decreases.

3. The attempt at a solution

I know it has something to do with peak points: that is, there are elements in (Xn) which are peak points (every element afterwards is smaller). There are either infinitely many peak points (in which case the subsequence consists of the peak points) or finitely many. I am having a hard time grasping what the subsequence consists of if there are a finite number of peak points...

2. Sep 18, 2007

### StatusX

If the sequence is unbounded, the result is easy. If it is bounded, it has a convergent subsequence. See if you can make this into a monotone subsequence.
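To fill in the step the original poster is asking about (a standard argument, added as a sketch rather than taken from the thread): if there are only finitely many peak points, pick N beyond the index of the last one. Then X_N is not a peak point, so there is some n1 > N with X_n1 >= X_N; since X_n1 is not a peak point either, there is some n2 > n1 with X_n2 >= X_n1, and so on. The subsequence X_n1, X_n2, X_n3, ... obtained this way is monotone (non-decreasing).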
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8944358825683594, "perplexity": 601.5127369473007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864790.28/warc/CC-MAIN-20180522131652-20180522151652-00118.warc.gz"}
https://chemistry.stackexchange.com/questions/51141/why-do-ionisations-from-non-bonding-orbitals-result-in-little-no-vibrational-str
# Why do ionisations from non-bonding orbitals result in little/no vibrational structure in a photoelectron spectrum?

I understand that ionisations from bonding and antibonding orbitals create vibrational structure via the Franck-Condon principle, resulting in significant overlap integrals with several different vibrational levels, but surely if a non-bonding orbital is ionised the equilibrium bond length will not change (much), resulting in a good overlap integral between the vibrational levels? If someone could explain the principle to me in this context I would be very grateful.

• The idea is that your molecule, prior to ionisation, is nearly always in its vibrational ground state. The intensity of the line is therefore governed by the overlap of the vibrational level of the ionised molecule with the ground-state vibrational level of the unionised molecule. For ionisation from a non-bonding orbital, this is largest for $v' = 0$, i.e. the transition to the vibrational ground state of the ionised molecule; from then on the overlap (and intensity) just decreases – orthocresol May 13 '16 at 15:55

• whereas for ionisation from bonding/antibonding orbitals the Franck-Condon factor is largest for some $v' \neq 0$ - this means that your intensities go up to a certain point, and then only decrease – orthocresol May 13 '16 at 15:56
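In symbols (a standard statement of the Franck-Condon factor, added here for reference rather than quoted from the comments): the intensity of the band ending in the ion's vibrational level $v'$ is proportional to $|\langle \chi_{v'}^{+} | \chi_{0} \rangle|^2$, the squared overlap with the initial $v'' = 0$ wavefunction. When the neutral and ionised potential curves nearly coincide, as for removal of a non-bonding electron, this overlap is close to 1 for $v' = 0$ and close to 0 for all other $v'$, so essentially only a single vibrational line appears.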
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6666375994682312, "perplexity": 1497.426160249395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144167.31/warc/CC-MAIN-20200219184416-20200219214416-00086.warc.gz"}
http://physics.stackexchange.com/questions/31448/how-can-a-particle-with-no-size-have-angular-momentum
# How can a particle with no size have angular momentum?

I was recently reading about the Higgs boson and particle spin, and I stumbled upon a question that explains what spin is. It explains that electrons have no size yet they have angular momentum. I don't understand what exactly is meant by that. Does it refer to the angular momentum of the magnetic field? I just don't see how something with no size can have any sort of angular momentum.

-

You've check-marked the wrong answer. Spin is an attribute of an individual electron, not of a collection of electrons. –  SS-3.1415926535897932384626433 Jul 7 '12 at 10:10

Please do not accept wrong/vacuous answers, it dilutes the value of the site. Sachin Shekar's answer is not good. The spin angular momentum is a real, honest-to-goodness angular momentum, not a mathematical analogy. It can be seen in the Einstein deHaas effect. –  Ron Maimon Jul 8 '12 at 7:33

Thank you for your comment, I will look into it. I do not know enough about it to decide which answer is the best yet, but I will read about the Einstein deHaas effect and then try to judge. I don't know what to do in a situation like this, or if there is a way to start a discussion about which answer is better or to have the community resolve this problem in another way. Otherwise I will do my best. –  Xitcod13 Jul 8 '12 at 20:58

As an answer, first I'd like to ask why you're approaching a quantum mechanics problem with a classical mechanics mental model. There are two types of angular momentum in quantum mechanics:

1. Orbital angular momentum, which is a generalization of angular momentum in classical mechanics (L = r × p). I think you shouldn't have a problem with this, because an orbital has size.

2. Spin, which has no analogue in classical mechanics. You can understand it as a number appearing in the quantum equation. It can be understood like charge (with physical dimension), which is a number denoting one of the basic attributes of particles. Yes, spin does have the physical dimension of angular momentum, but that is because it is, mathematically, a type of angular momentum.

-

To answer your first question: if I understood quantum mechanics I would not ask questions about it. Q.M. has a lot of confusing vocabulary which means something else in "regular" English. Also, what exactly do you mean by "which is a number (with physical dimension)"? Numbers can't have physical dimensions. You mean the charge has physical dimensions? –  Xitcod13 Jul 6 '12 at 13:08

@Xitcod13 I meant charge is (just a number) with a physical dimension. Of course, numbers don't have physical dimensions. :) –  SS-3.1415926535897932384626433 Jul 6 '12 at 13:17

Spin is not a type of angular momentum, it is angular momentum, period. This is demonstrated by the Einstein deHaas experiment detailed in my answer. –  Ron Maimon Jul 8 '12 at 6:45

@Ron That experiment is out-dated. At that time, there wasn't any concept of an orbital. –  SS-3.1415926535897932384626433 Jul 8 '12 at 7:21

It means exactly what it says: the point particle has an angular momentum. In quantum mechanics, angular momentum is dimensionless (since hbar has units of angular momentum), and saying the spinning electron has angular momentum means that if you have a large number of electrons with spin up sitting on a disk (like a disk magnetized with a B field going in one direction perpendicular to the disk), and you suddenly reverse B so that all the electrons flip their spin to the other direction, then the disk starts spinning to conserve the angular momentum of the flip.
This is the famous Einstein deHaas experiment that established that magnetization is carried by electron spin. - @SachinShekhar: This is incorrect--- the transfer is not due to field angular momentum, it isn't a macroscopic spinning up of the magnet because of field action. It's to do with the spin angular momentum of the electrons. When you flip the spin of all the electrons in a magnet, you start the magnet spinning macroscopically due to the angular momentum of the spinning electrons. The experimental paper might have assumed an electron radius (I didn't read it), but it doesn't need to, all it is using is that electrons have angular momentum. It's a reproduced classic experiment, why not use it? –  Ron Maimon Jul 7 '12 at 19:56 @SachinShekhar: yes, you are changing the spin of a sizable fraction of the electrons in the magnet by changing the direction of B. This is how it is done in a lab. "All" the electrons can have the same spin, they have different positions. By "all" I don't literally mean every single one, just the ones involved in making the magnet magnetized. When you flip B, you flip the magnetization, 2% of the electrons flip their spin, and the bar starts to rotate with the exact amount of angular momentum lost by the electrons. I prefer thinking to reading. –  Ron Maimon Jul 7 '12 at 20:40 @SachinShekhar: They are clear, you just didn't understand them. Two electrons in the "same quantum state" have the same position distribution as well as spin. Electrons on different atoms can have the same spin. The electrons on different atoms in a magnet are spinning in the same direction, this is why you have magnetization. When you flip the magnetization, you can detect the change in angular momentum of the electrons--- the magnet rotates. –  Ron Maimon Jul 7 '12 at 21:09 @SachinShekhar: The electrons with the same spin in a ferromagnetic are on different atoms. –  Ron Maimon Jul 8 '12 at 6:42 @SachinShekhar: I have not "accepted" anything! You have just misinterpreted my comments in absurd ways. Of course not all the electrons have the same spins, all the unpaired electrons have the same spins, the unpaired electrons in interior orbitals are responsible for magnetism. Orbital angular momentum of these electrons does not contribute to magnetism angular momentum change, because when you change the magnetism, the orbital is exactly the same, only the spin of the electron changes. This is the reason the Einstein deHaas experiment works to measure electron spin angular momentum. –  Ron Maimon Jul 8 '12 at 7:11
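To make the conservation argument quantitative (a rough sketch added here, not part of the original answers): if $N$ unpaired electrons each flip their spin projection from $+\hbar/2$ to $-\hbar/2$, the electrons' total angular momentum changes by $N\hbar$, so a disk of moment of inertia $I$ must start rotating with $I\omega = N\hbar$, i.e. $\omega = N\hbar/I$. For a macroscopic magnetized sample this is a small but measurable rotation, which is exactly what the Einstein deHaas experiment detects.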
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8875793814659119, "perplexity": 517.6975074866831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676977.21/warc/CC-MAIN-20151001215756-00122-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.mathdoubts.com/matrix/representation/
Representation of a Matrix

A system of denoting a matrix by a symbol is called the representation of a matrix. A matrix is written by enclosing its elements in either square or round brackets.

$\left[\begin{array}{ccccc}4& 9& 1& 0& 6\\ 2& 4& 6& 8& 9\\ 1& 3& 2& 5& 7\end{array}\right]$ (or) $\left(\begin{array}{ccccc}4& 9& 1& 0& 6\\ 2& 4& 6& 8& 9\\ 1& 3& 2& 5& 7\end{array}\right)$

Writing a matrix out in full every time it is used takes too long. Instead, matrices are represented by symbols, usually capital letters such as $A, B, C$ and so on. So, choose a letter and equate it to the matrix:

$R=\left[\begin{array}{ccccc}4& 9& 1& 0& 6\\ 2& 4& 6& 8& 9\\ 1& 3& 2& 5& 7\end{array}\right]$

Now, simply write the letter $R$ instead of writing out the entire matrix.
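The same convention carries over directly to computation; here is a minimal NumPy sketch (my own illustration, with the variable name R mirroring the example above):

```python
import numpy as np

# Assign the 3x5 matrix to the symbol R once...
R = np.array([[4, 9, 1, 0, 6],
              [2, 4, 6, 8, 9],
              [1, 3, 2, 5, 7]])

# ...and refer to it by name afterwards instead of rewriting the entries.
print(R.shape)   # (3, 5)
print(R[0, 4])   # 6  (row 1, column 5 of the matrix)
```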
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9803483486175537, "perplexity": 630.9938527645935}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657557.2/warc/CC-MAIN-20190116175238-20190116201238-00010.warc.gz"}
http://mathhelpforum.com/math-topics/53809-quadratic-formula-print.html
• October 15th 2008, 01:15 AM xwrathbringerx

Are the solutions to x^2 - 4x - 8 = 0 really (4 +/- sqrt(48))/2? Because every time I try working it out using the quadratic formula, my answer comes out with a 16 instead of the 4? (Worried)??

• October 15th 2008, 01:36 AM mr fantastic

Quote: Originally Posted by xwrathbringerx
Are the solutions to x^2 - 4x - 8 = 0 really (4 +/- sqrt(48))/2? Because every time I try working it out using the quadratic formula, my answer comes out with a 16 instead of the 4? (Worried)??

Yes. $x = \frac{-(-4) \pm \sqrt{(-4)^2 - 4(1)(-8)}}{2} = \frac{4 \pm \sqrt{16 + 32}}{2} = \frac{4 \pm \sqrt{48}}{2}$

Note: $= \frac{4 \pm \sqrt{16 \times 3}}{2} = \frac{4 \pm 4 \sqrt{3}}{2} = 2 \pm 2 \sqrt{3}$.
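As a quick check (not part of the original thread), substituting $x = 2 + 2\sqrt{3}$ back into the quadratic confirms the roots: $(2 + 2\sqrt{3})^2 - 4(2 + 2\sqrt{3}) - 8 = (16 + 8\sqrt{3}) - (8 + 8\sqrt{3}) - 8 = 0$, and the same cancellation occurs for $x = 2 - 2\sqrt{3}$.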
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7668570280075073, "perplexity": 1010.9779155820294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936469305.48/warc/CC-MAIN-20150226074109-00105-ip-10-28-5-156.ec2.internal.warc.gz"}
https://plainmath.net/28160/what-the-probability-that-randomly-selected-student-from-this-school-boy
# What is the probability that a randomly selected student from this school is a boy?

The following two-way table displays information about favorite sports cars that resulted from a survey given to all students at Shore High School.

$$\begin{array}{|c|c|c|c|c|}\hline & \text{Corvette (C)} & \text{Porsche (P)} & \text{Ferrari (F)} & \text{Total} \\ \hline \text{Boys (B)} & 90 & 60 & 120 & 270 \\ \hline \text{Girls (G)} & 110 & 141 & 79 & 330 \\ \hline \text{Total} & 200 & 201 & 199 & 600 \\ \hline \end{array}$$

What is the probability that a randomly selected student from this school is a boy?

Brighton

Formula: P(B) = (total number of boys) ÷ (total)

Identify the variables: Total number of boys = 270, Total = 600, P(B) = ?

Substitute: $$\displaystyle{P}{\left({B}\right)}=\frac{{270}}{{600}}=\frac{{9}}{{20}}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30023229122161865, "perplexity": 1860.8853920516556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362918.89/warc/CC-MAIN-20211203182358-20211203212358-00471.warc.gz"}
http://mathoverflow.net/questions/102243/reduction-of-endomorphism-ring-of-non-cm-elliptic-curve
# Reduction of endomorphism ring of a non-CM elliptic curve

Let $E$ be an elliptic curve defined over a number field, without complex multiplication and with ordinary reduction at a prime $p\in\mathbb{N}$. When is the reduction mod $p$ map a surjection on the endomorphism ring, i.e. when is $\overline{End(E)} \cong End(\overline{E})$?

- Never. An elliptic curve over a finite field always has an extra endomorphism, namely Frobenius. –  user18237 Jul 14 '12 at 18:08

Perhaps it is worth noting that Frobenius can be a rational integer (necessarily $\pm q^{n/2}$), but of course this forces the curve to be supersingular. –  user18237 Jul 14 '12 at 19:18

A reference for gb's comment: Lang's "Elliptic Functions", Chapter 13, Section 2, Theorem 5. –  Álvaro Lozano-Robledo Jul 27 '12 at 14:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9082573652267456, "perplexity": 515.1087582465578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654467.42/warc/CC-MAIN-20150417045734-00144-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-1-introduction-to-algebraic-expressions-1-2-the-commutative-associative-and-distributive-laws-1-2-exercise-set-page-16/10
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) Published by Pearson # Chapter 1 - Introduction to Algebraic Expressions - 1.2 The Commutative, Associative, and Distributive Laws - 1.2 Exercise Set: 10 distributive #### Work Step by Step $2a+2b=2(a+b)$ is an example of the distributive property.
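A quick numerical instance (added for illustration): taking $a=3$ and $b=4$ gives $2a+2b=6+8=14$ and $2(a+b)=2(7)=14$, so both forms agree.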
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32716500759124756, "perplexity": 1720.6152514072837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00372.warc.gz"}
http://mathoverflow.net/questions/114518/a-question-of-line-bundle-for-finite-etale-covering
# A question about line bundles for finite etale coverings

Let $X$ be a smooth curve over an algebraically closed field $k$, and $f:Y \longrightarrow X$ a Galois finite etale covering with Galois group $G$ and degree $n$. Suppose that $L$ is a line bundle on $X$. Does there exist a line bundle $M$ on $Y$ such that $M^{\otimes n}=f^{*}L$?

Let $C/k$ be a smooth projective curve and let $U/C$ be a line bundle. If $n\mid{\rm deg}(U)$ then there exists a line bundle $R$ such that $R^{\otimes n}=U$. This follows from the fact that multiplication by $n$ is an isogeny on ${\rm Pic}_0(C)$, and thus the tensoring-by-$n$ morphism ${\rm Pic}_{{\rm deg}(U)/n}\to {\rm Pic}_{{\rm deg}(U)}$ is surjective. This applies to your set-up, with $Y=C$ and $U=f^*L$. See also mathoverflow.net/questions/44692/… –  Damian Rössler Nov 26 '12 at 12:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971431493759155, "perplexity": 61.47900167982851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007324.75/warc/CC-MAIN-20141125155647-00046-ip-10-235-23-156.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3090353/how-to-write-summation-of-squared-divergence-terms-in-index-summation-notation/3094218
# How to write summation of squared divergence terms in index summation notation? Sorry if this has already been asked before, but it's really difficult to try and explain the problem in words. Anyways, I want to express the following: $$\phi = \left(\frac{\partial u_1}{\partial x_1}\right)^2 + \left(\frac{\partial u_2}{\partial x_2}\right)^2 + \left(\frac{\partial u_3}{\partial x_3}\right)^2$$ My first instinct would be to square the divergence of $$u_i$$, but this would result in: $$\left(\frac{\partial u_k}{\partial x_k}\right)^2 = \left(\frac{\partial u_1}{\partial x_1} + \frac{\partial u_2}{\partial x_2}+ \frac{\partial u_3}{\partial x_3}\right)^2$$ I've narrowed it down to either of the following, but they both seem more like "hacks" than correct. I've also included the general line of thinking behind the ideas as well. ## Idea 1: $$\text{Idea 1: }\quad \frac{\partial (u_k)^2}{\partial (x_k)^2}=\frac{\partial (u_1)^2}{\partial (x_1)^2} + \frac{\partial (u_2)^2}{\partial (x_2)^2} + \frac{\partial (u_3)^2}{\partial (x_3)^2} = \left(\frac{\partial u_1}{\partial x_1}\right)^2 + \left(\frac{\partial u_2}{\partial x_2}\right)^2 + \left(\frac{\partial u_3}{\partial x_3}\right)^2$$ This being analogous to: $$\text{Inspiration for Idea 1:} \quad \frac{(A)^2}{(B)^2} = \left(\frac{A}{B}\right)^2$$ ## Idea 2: $$\text{Idea 2: } \quad \frac{\partial u_k}{\partial x_k}\frac{\partial u_k}{\partial x_k} = \frac{\partial u_1}{\partial x_1}\frac{\partial u_1}{\partial x_1} + \frac{\partial u_2}{\partial x_2}\frac{\partial u_2}{\partial x_2} + \frac{\partial u_3}{\partial x_3}\frac{\partial u_3}{\partial x_3} = \left(\frac{\partial u_1}{\partial x_1}\right)^2 + \left(\frac{\partial u_2}{\partial x_2}\right)^2 + \left(\frac{\partial u_3}{\partial x_3}\right)^2$$ This idea was inspired from the incompressible form of the shear stress equations for fluids: $$\text{Inspiration for Idea 2:} \quad \frac{\partial^2 \tau_{ij}}{\partial x_j \partial x_j} = \frac{\partial^2 \tau_{i1}}{\partial x_1^2} + \frac{\partial^2 \tau_{i2}}{\partial x_2^2} + \frac{\partial^2 \tau_{i3}}{\partial x_3^2}$$ On a slightly related note, if anyone knows of a good resource that goes over common index summation notation forms of common math expressions or the "rules" of index summation notation, let me know! Using the Kronecker delta, $$\delta_{ij}$$: $$\delta_{ij} =\left\{ \begin{array}{ll} 0 & i\not=j \\ 1 & i=j \\ \end{array} \right.$$ $$\left(\frac{\partial u_1}{\partial x_1}\right)^2 + \left(\frac{\partial u_2}{\partial x_2}\right)^2 + \left(\frac{\partial u_3}{\partial x_3}\right)^2 = \frac{\partial u_j}{\partial x_k} \frac{\partial u_j}{\partial x_k} \delta_{jk}$$
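A quick numerical cross-check of the Kronecker-delta expression above (a sketch in Python/NumPy, which is my choice of tool rather than anything from the question): build a random gradient tensor standing in for the partial derivatives and compare the delta-contracted sum against the explicit sum of squared diagonal terms.

```python
import numpy as np

rng = np.random.default_rng(0)
dudx = rng.normal(size=(3, 3))   # dudx[j, k] plays the role of du_j/dx_k
delta = np.eye(3)                # Kronecker delta

# phi written with the delta, summing over both j and k:
phi_delta = np.einsum('jk,jk,jk->', dudx, dudx, delta)

# phi written out explicitly as the sum of the squared diagonal entries:
phi_explicit = sum(dudx[i, i] ** 2 for i in range(3))

print(np.isclose(phi_delta, phi_explicit))   # True
```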
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9751848578453064, "perplexity": 569.117936394461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496901.28/warc/CC-MAIN-20200330085157-20200330115157-00156.warc.gz"}
https://www.maplesoft.com/support/help/Maple/view.aspx?path=Groebner%2FWalk
Walk - Maple Help Groebner Walk convert Groebner bases from one ordering to another Calling Sequence Walk(G, T1, T2, opts) Parameters G - Groebner basis with respect to starting order T1 or a PolynomialIdeal T1,T2 - monomial orders (of type ShortMonomialOrder) opts - optional arguments of the form keyword=value Description • The Groebner walk algorithm converts a Groebner basis of commutative polynomials from one monomial order to another.  It is frequently applied when a Groebner basis is too difficult to compute directly. • The Walk command takes as input a Groebner basis G with respect to a monomial order T1, and outputs the reduced Groebner basis for G with respect to T2.  If the first argument G is a PolynomialIdeal then a Groebner basis for G with respect to T1 is computed if one is not already known. • The orders T1 and T2 must be proper monomial orders on the polynomial ring, so 'min' orders such as 'plex_min' and 'tdeg_min' are not supported. Walk does not check that G is a Groebner basis with respect to T1. • Unlike FGLM, the ideal defined by G can have an infinite number of solutions. The Groebner walk is typically not as fast as FGLM on zero-dimensional ideals. • The optional argument characteristic=p specifies the characteristic of the coefficient field. The default is zero.  This option is ignored if G is a PolynomialIdeal. • The optional argument elimination=true forces the Groebner walk to terminate early, before a Groebner basis with respect to T2 is obtained.  If T2 is a lexdeg order with two blocks of variables the resulting list will contain a generating set of the elimination ideal. • The optional argument output=basislm returns the basis in an extended format containing leading monomials and coefficients.  Each element is a list of the form [leading coefficient, leading monomial, polynomial]. • Setting infolevel[Walk] to a positive integer value directs the Walk command to output increasingly detailed information about its performance and progress. 
Examples

> with(Groebner):
> F1 := [10*x*z - 6*x^3 - 8*y^2*z^2, -6*z + 5*y^3];

F1 := [-8*y^2*z^2 - 6*x^3 + 10*x*z, 5*y^3 - 6*z]    (1)

> G1 := Basis(F1, tdeg(x, y, z));

G1 := [5*y^3 - 6*z, 4*y^2*z^2 + 3*x^3 - 5*x*z, 15*x^3*y - 25*x*y*z + 24*z^3, 45*x^6 - 96*y*z^5 - 150*x^4*z + 125*x^2*z^2]    (2)

> Walk(G1, tdeg(x, y, z), plex(x, y, z));

[5*y^3 - 6*z, 4*y^2*z^2 + 3*x^3 - 5*x*z]    (3)

> alias(alpha = RootOf(z^2 + z + 5));

alpha    (4)

> F2 := [-10*y*x - 9*x^3 + 2*z*alpha^2 - 4*y^3*alpha, 6*alpha^2 - 2*x^2*alpha + 9*x*alpha^2 - 8*y^3*x]:
> G2 := Basis(F2, tdeg(x, y, z));

G2 := [10*y*x + (2*alpha + 10)*z + 9*x^3 + 4*y^3*alpha, 6*alpha + 30 + 2*x^2*alpha + 8*y^3*x + (9*alpha + 45)*x, -24*alpha + 30 + 32*y^6 + (56*alpha + 10)*x^2 - 16*alpha*y^3*z + (-72*alpha + 90)*z + (-36*alpha + 45)*x + 60*alpha*y + (36*alpha + 180)*y^3 + (4*alpha + 20)*x*z]    (5)

> Walk(G2, tdeg(x, y, z), plex(x, y, z));

(6)  [Four polynomials in alpha, x, y, and z with very large integer coefficients and degrees up to 12 in y; the full reduced lexicographic basis is not reproduced here.]

References

Amrhein, B.; Gloor, O.; and Kuchlin, W. "On the Walk." Theoretical Comput. Sci., Vol. 187, (1997): 179-202.

Collart, S.; Kalkbrener, M.; and Mall, D. "Converting Bases with the Grobner Walk." J. Symbolic Comput., Vol. 3, No. 4, (1997): 465-469.

Tran, Q.N. "A Fast Algorithm for Grobner Basis Conversion and Its Applications." J. Symbolic Comput., Vol. 30, (2000): 451-467.
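For readers without Maple, the first example's order change can be reproduced (by direct recomputation rather than by a walk) in SymPy. This is a hedged sketch, not Maple functionality; Maple's tdeg order corresponds to SymPy's grevlex, and the exact scaling of the output polynomials may differ:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
F1 = [10*x*z - 6*x**3 - 8*y**2*z**2, -6*z + 5*y**3]

# Groebner basis in a total-degree order (the analogue of tdeg above)...
G_grevlex = groebner(F1, x, y, z, order='grevlex')

# ...and in pure lexicographic order, which is what Walk produces above.
G_lex = groebner(F1, x, y, z, order='lex')

print(G_grevlex.exprs)
print(G_lex.exprs)
```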
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8883253931999207, "perplexity": 2327.5663786885943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710902.80/warc/CC-MAIN-20221202114800-20221202144800-00320.warc.gz"}
https://jqi.umd.edu/research/publications?amp%3Bf%5Bauthor%5D=72&s=type&o=desc&f%5Bauthor%5D=7386
# Publications Export 430 results: Author Title [ Type] Year Filters: Author is S. Das Sarma  [Clear All Filters] Journal Article Zero-bias conductance peak in Majorana wires made of semiconductor/superconductor hybrid structures, , Phys. Rev. B, 86, 224511 (2012) Wigner supersolid of excitons in electron-hole bilayers, , Phys. Rev. B, 74 (2006) Wiedemann-Franz law and Fermi liquids, , Phys. Rev. B, 99, 085104 (2019) Why Does Graphene Behave as a Weakly Interacting System?, , PHYSICAL REVIEW LETTERS, 113 (2014) Weyl fermions with arbitrary monopoles in magnetic fields: Landau levels, longitudinal magnetotransport, and density-wave ordering, X. Li, B. Roy, and S. Das Sarma , PHYSICAL REVIEW B, 94, 195144 (2016) Variational study of polarons in Bose-Einstein condensates, W. Li, and S. Das Sarma , PHYSICAL REVIEW A, 90 (2014) Valley-dependent many-body effects in two-dimensional semiconductors, , PHYSICAL REVIEW B, 80 (2009) Valley-Based Noise-Resistant Quantum Computation Using Si Quantum Dots, , PHYSICAL REVIEW LETTERS, 108 (2012) Universal spin-triplet superconducting correlations of Majorana fermions, , PHYSICAL REVIEW B, 92, 014513 (2015) Universal optical conductivity of a disordered Weyl semimetal, , SCIENTIFIC REPORTS, 6, 32446 (2016) Universal Conductance Fluctuations in Dirac Materials in the Presence of Long-range Disorder, , PHYSICAL REVIEW LETTERS, 109 (2012) Understanding analog quantum simulation dynamics in coupled ion-trap qubits, , PHYSICAL REVIEW A, 93, 022332 (2016) Uncovering the hidden quantum critical point in disordered massless Dirac and Weyl semimetals, , PHYSICAL REVIEW B, 94, 121107 (2016) Two-dimensional transport and screening in topological insulator surface states, , PHYSICAL REVIEW B, 85 (2012) Two-dimensional surface charge transport in topological insulators, , PHYSICAL REVIEW B, 82 (2010)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8452417254447937, "perplexity": 10885.010752988663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00511.warc.gz"}
https://socratic.org/questions/how-do-i-solve-25x-2-20x-11-by-completing-the-square
# How do I solve 25x^2 - 20x = 11 by completing the square?

Sep 13, 2014

Completing the square is a method of solving a quadratic equation that involves finding a value to add to both the left and right sides of the equation. This value has the extra benefit of making one side of the equation a perfect square trinomial, which makes the function easier to identify and/or graph.

Let's begin by factoring $25$ from the left side of the equation.

$25 \left({x}^{2} - \frac{20}{25} x\right) = 11$

Take the coefficient of the $x$ term, divide it by $2$ and square it:

${\left(\frac{- \frac{20}{25}}{2}\right)}^{2} = {\left(- \frac{20}{25} \cdot \frac{1}{2}\right)}^{2} = {\left(- \frac{10}{25}\right)}^{2} = \frac{100}{625}$

$\frac{100}{625}$ is the number you add inside the parentheses on the left side, and $25 \left(\frac{100}{625}\right)$ is added to the right side because we initially factored $25$ out of the left side. The equation remains balanced because equal amounts are added to both sides.

$25 \left({x}^{2} - \frac{20}{25} x + \frac{100}{625}\right) = 11 + 25 \left(\frac{100}{625}\right)$

$25 \left({x}^{2} - \frac{20}{25} x + \frac{100}{625}\right) = 11 + \frac{100}{25}$

$25 \left({x}^{2} - \frac{20}{25} x + \frac{100}{625}\right) = 11 + 4$

$25 \left({x}^{2} - \frac{20}{25} x + \frac{100}{625}\right) = 15$

$\left({x}^{2} - \frac{20}{25} x + \frac{100}{625}\right) = \frac{15}{25}$

${\left(x - \frac{10}{25}\right)}^{2} = \frac{15}{25}$

Taking the square root of both sides (remembering both signs):

$x - \frac{10}{25} = \pm\sqrt{\frac{15}{25}}$

$x = \pm\frac{\sqrt{15}}{5} + \frac{10}{25}$

$x = \pm\frac{\sqrt{15}}{5} + \frac{2}{5}$

$x = \frac{2 \pm \sqrt{15}}{5}$
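As a check (not in the original answer), substitute $x = \frac{2 + \sqrt{15}}{5}$ back into the original equation: $25 x^2 - 20 x = 25 \cdot \frac{19 + 4\sqrt{15}}{25} - 20 \cdot \frac{2 + \sqrt{15}}{5} = (19 + 4\sqrt{15}) - (8 + 4\sqrt{15}) = 11$, as required; the root $x = \frac{2 - \sqrt{15}}{5}$ from the negative square root checks out the same way.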
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7096603512763977, "perplexity": 278.8223941958988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390448.11/warc/CC-MAIN-20200526050333-20200526080333-00221.warc.gz"}
http://tris.nmpcare.ru/9568/658
# 0.95 as a fraction

As a fraction, 0.95 is 95/100, which simplifies to 19/20. In statistics, 0.95 is also 0.95 standard deviations to the right of the mean on a bell curve, and in photography f/0.95 denotes an unusually fast lens aperture; both senses appear below.

## Fraction

A fraction is made up of a numerator and a denominator separated by a line. Usually written as two numbers separated by a horizontal or diagonal line, fractions are used to indicate a part of a whole number or a ratio between two numbers. We call the top number the numerator; it is the number of parts we have. The number below the line is called the denominator. A fraction corresponds to a number, whole or decimal, written in the form of a division: numerator/denominator = the number's decimal value. In other words, a fraction is a quotient of the numerator by the denominator.

Simply, a fraction is used to indicate a part of a whole. The most common examples of fractions from real life are equal slices of pizza, fruit, cake, a bar of chocolate, etc.: we have to willingly or unwillingly share that yummy pizza amongst our friends and families. Know that a fraction is a way of indicating parts of a whole; for instance, 1/4 is the same as 1 ÷ 4.

Is 5/9 a proper fraction? Yes, it is, because the numerator 5 is less than the denominator 9. Is 6/5 a proper fraction? No, it is not, because the numerator 6 is more than the denominator 5; it is an improper fraction.

You can quickly add fractions with the same denominator: just add the numerators, keeping the denominator the same. Example: 1/4 + 2/4 = 3/4. With unlike denominators, use the least common multiple: 3/5 + 10/15, with LCM 15, equals (9 + 10)/15 = 19/15. A lot of word problems on fractions appear in our textbooks that ask us to compare fractions, multiply mixed numbers and improper fractions, and move between decimals and fractions.

Two fractions are equal if one passes from the first to the second by multiplying or dividing both the numerator and the denominator by the same number. Rewriting a fraction with the smallest possible numerator and denominator is called simplifying, or reducing, the fraction. Fractions convert to decimals by carrying out the division: 1/2 = 0.5, 1/4 = 0.25, 1/5 = 0.20, 3/4 = 0.75, 1/10 = 0.1. Fractions can also be represented on a number line, and every fraction can be written as a sum of Egyptian fractions (Fibonacci's algorithm produces one such decomposition). As an example of a fraction describing a share of a group: if two of five children are boys, the fraction of boys is two-fifths (2/5).

The word has other uses as well. Fraction is the name of a hardcore punk band formed in Nice in 1994, with a sound oriented toward punk hardcore with metal touches and a radical, protest-oriented message; in political usage, a fraction is a splinter group within a party. Elsewhere, 0.95 appears simply as a software version number, as in emulator releases and game patches labelled 0.95.

In photography, the Nikon NIKKOR Z 58mm f/0.95 S Noct is a 58mm manual focus lens with a fast f/0.95 aperture, an homage to the legendary Leica Noctilux-M 50mm f/0.95 Aspherical, whose designers added aspherical lens elements and floating elements to keep performance high at close focus distances. The Mitakon Speedmaster 17mm f/0.95 from Zhongyi Optics (ZY Optics) is a compact, bright wide-angle prime for Micro Four Thirds mirrorless cameras; wide open at f/0.95 it produces obvious coma and vignetting, but there is barely any noticeable color fringing and chromatic aberrations are under control. The TTArtisan 50mm f/0.95 for Leica M mount is probably the most anticipated lens by TTArtisan so far, as, at least on paper, it rivals the famous yet unobtainably expensive Leica 50mm f/0.95.

A review of the Mitakon Speedmaster 50mm f/0.95 III (a sample of which was kindly provided free of charge by the manufacturer for reviewing purposes) notes that the most obvious change is Zhong Yi's decision to use 11 straight aperture blades instead of 9 rounded ones, which has a huge (in the reviewer's opinion negative) impact on stopped-down bokeh, as out-of-focus highlights are rendered as 11-sided polygons instead of circles; after being told, the manufacturer said that a return to rounded blades would be considered for further production runs. Wide open there is light falloff of roughly 2.0 EV. Flare resistance is still pretty bad: no matter where you place the sun in the frame, you will pretty much always find ghosts somewhere in the frame. The focus throw is 120°, and the focus ring sits a bit loose, as was already the case with the MK II version. For a 50mm f/0.95 lens the bokeh is the most important factor; the old version is significantly worse at f/0.95, and this is not a lens the reviewer would recommend for astrophotography at wider apertures.
In the center of the frame almost every lens will render a perfect circle, but only lenses with very low mechanical vignetting will keep this shape in the corners. 95 aperture to suit working in low-light conditions. 1 EV, stopped down to f/1. Lens and its maximum aperture of 0. Find more ways to say fraction, along with related words, antonyms and example phrases at Thesaurus. Grossesse m&244;laire. Propri&233;t&233; Un nombre en &233;criture fractionnaire ne change pas si l’on multiplie (ou on divise) le num&233;rateur et le d&233;nominateur par le m&234;me nombre. Find out more about us. The numerator (75) is greater than the denominator (51), so this fraction is an improper fraction. 5th" and "0. En math&233;matiques, expression num&233;rique indiquant le nombre de parties &233;gales d'une unit&233; donn&233;e auquel un &233;l&233;ment est &233;quivalent :. Les synonymes du mot fraction pr&233;sent&233;s sur ce site sont &233;dit&233;s par l’&233;quipe &233;ditoriale de synonymo. Une fraction d&233;signe empiriquement une partie d’un tout exprim&233;e sous la forme d’un rapport de deux nombres entiers positifs a et b, avec a < b. TTArtisan 50mm 0. Dem Integral der Dichtefunktion. Niquel, c'est ce que j'attendais comme explication. 95 APK - Termux is a powerful terminal emulator for the Android operating systems that also brings some Linux packages. 95 7 cm optima c i2 20 scho. Our evidenc. La minorit&233; s'efforce de constituer dans toutes les entreprises des comit&233;s syndicalistes r&233;volutionnaires, &224; la fois groupes fractionnels et futurs. Downloads 78193. The older version I am using in the comparisons I bought myself from the German retailer in and have been using it since. Identify a proper fraction, improper fraction, mixed number, unit fraction, like fractions, unlike fractions like a pro with these printable types of fractions worksheets. 95 US$399. English 中文 Copyright &169; AS international co. 1 released. Fraction (third-person singular simple present fractions, present participle fractioning, simple past and past participle fractioned) 1. 95 inch AMOLED touchscreen of great colour and contrast, HUAWEI Band 4 Pro provides you an ultimate visual experience. Additional. · 7artisans announced a new 35mm f/0. Une telle fraction s'appelle une fraction irr&233;ductible (qu'on ne peut pas r&233;duire). I don’t want to anticipate the conclusion right at the beginning, but if you don’t want to use this lens at f/0. I had occasion. 95 is an homage to the Leica Noctilux - M 50 mm f/0. Fraction molaire et masse molaire. A fraction has two parts. Adding fractions is a very handy skill to know. A fraction represents number of equal parts of a whole. Voici la m&233;thode pour convertir une fraction en d&233;cimal ou l'inverse. · The Nikon 58mm f/0. &0183;&32;MSFT | Complete Microsoft Corp. Dann geht es durch wenig steiles Gel&228;nde weiter bis zum Gipfel. Retrouver la d&233;finition du mot fraction avec le Larousse. A small part of something, or a small amount: Although sexual and violent crimes have increased by 13 percent, they remain only a tiny / small fraction of the total number of crimes committed each year. It’s the percentage of blood that leaves your left ventricle when your heart. 1992, Rudolf Mathias Schuster, The Hepaticae and Anthocerotae of North America: East of the Hundredth Meridian, volume V, New York, N. Mixing a medium-telephoto focal length with an exceptionally fast design, the Voigtlander Nokton 60mm f/0. 
95" Color Screen: Bigger screen displays a comprehensive and obvious activity information; Simply lift your wrist to view directly the time and tap the button for steps, heart rate and other activity information; Also it can not only remind you via vibrating when your phone receives calls, messages or other notifications such as from. 43 stops darker than the centre of the frame and illumination is visually uniform with the aperture stopped down to f/5. Visit UK Antiques Fairs at Lincoln Showground, Donington Park, Ripley Castle, Tatton Park, Loseley Park and Cheshire Showground. See full list on calculator. The numerator represents the number of equal parts of a whole, while the denominator is the total number of parts that make up said whole. 8 200 sruk' to 120 _oilbau -0. Bienvenue sur notre page pour faire des exercices et savoir parfaitement comment simplifier une fraction! Vous pouvez l'imprimer, la t&233;l&233;charger, ou la sauvegarder et l'utiliser dans votre. Furthermore a few more code cleanups slipped in. 95: 10-49: . Superimpose fractions upon each other to compare fractions or see equal parts. When adding and subtracting fractions the denominators must be the same. Mit der MSI Gaming App k&246;nnen Profile und LED-Effekte von Grafikkarten gewechselt werden. Look it up now! Reveal or hide numeric labels as needed. At one point, this aperture would have been a standout feature but nowadays,. You can correct this in Lightroom, but there is no profile yet. Luckily I got the chance to review one of the early production models, so let us find out if this is a worthy update! Where is 0. For K-12 kids, teachers and parents. D&233;finition fraction. Parfait, merci beaucoup! Mechanical vignetting Very fast lenses usually show a significant amount of mechanical vignetting. Unlike fractions 5. Our solution is simple, and easy to understand, so. 563%) and 5,13-docosadienoic acid (C22H40O2, 13. Based on the properties of numerator and denominator, fractions are sub-divided into different types. These 2/3" lenses feature an extremely high relative aperture of 0. Number of Aperture Blades: 11 (straight) 1. This free fraction calculator supports fraction addition, subtraction, multiplication and division. Exemple de calcul de fraction &224; d&233;cimal. From working with a number line to comparing fraction quantities, converting mixed numbers, and even using fractions in addition and subtraction problems, the fractions games below introduce your students to their next math challenge as they play to rack up points and win the game. I was really hoping for big improvements here. By clicking or navigating the site, you agree to allow our collection of information on and off Facebook through cookies. Glue advice and picker, because RCers have a need to glue things. This lens wouldn’t be my first choice for landscape/architecture photography. Great rates. Specialties: The most spectacular views in San Diego. The fraction calculator will simply the answer for you. 240) (Encontrar seu endere&231;o IP e localiza&231;&227;o). Explore & Shop today! They were necessary to fix a few issues like ‘make dist’ not compiling. Fraction n. . 95 lens in 1959. Il ne reste plus qu’&224; placer. 95 S Noct lens comes with its own Pelicase in the box to house it, which is an indicator both of how heavy and large it is, as well as hinting at the fact that it’s a premium product. On planetorganic. Watson Group is the world’s largest international health and beauty retailer with 16,000 stores in 27 markets. 
Fraction A mathematical expression representing the division of one whole number by another. In statistics, the coefficient of determination, denoted R 2 or r 2 and pronounced "R squared", is the proportion of the variance in the dependent variable that is predictable from the independent variable(s). Whereas it is difficult sometimes to perform operations on fractions. Length: 85mm 1. Read on for more information about adding fractions. But first, we&39;ll think about the most fundamental. File type Game mod. &201;crire un nombre comme la fraction 100 est une autre fa&231;on simple d'&233;crire des pourcentages. Use a color to select specific parts to show a fraction of the whole. 95 equation. QTY: DISCOUNT: 1-9: . Il suffit de diviser le num&233;rateur par le d&233;nominateur pour ainsi obtenir le nombre avec d&233;cimal. About Press Copyright Contact us Creators Advertise Developers Terms Privacy Policy & Safety How YouTube works Test new features Press Copyright Contact us Creators. 1898, Winston Churchill, chapter 2, in The Celebrity: 5. A lire &233;galement la d&233;finition du terme fraction sur le ptidico. J'accepte de recevoir gratuitement par email des informations, offres et services de l'ASM Clermont-Auvergne et de l'ensemble de ses activit&233;s (Boutique, Billetterie, ASM Experience, Fan ASM, Hospitalit&233;, ASM Events, Sportives, Institutionnelles). How to use fraction in a sentence. Portrait distance. It is a complete square. 95 - A BRAND OF LEICA CAMERA AG. &169; ASRock Inc. The first one (58mm filter thread. From the comparison above you see that slightly stopped down it is much better compared to f/0. 95 - A BRAND OF LEICA CAMERA AG Uniting the senses of sight and sound: 0. An aliquot portion or any portion. 517%) were the major content of fraction 4. 95, calculate the right-tailed and left-tailed critical value for Z Calculate right-tailed value: Since α = 0. &224; la fin de ton calcul tu &233;cris or -3 est une valeur interdite donc l'&233;quation admet une unique solution -7/2. A fraction in which both the numerator and the denominator are whole numbers. Ah oui, donc la solution n'est que 7/2 dans ce cas puisque -3 est une valeur interdite. Dans le cas o&249; a ou b sont des nombres d&233;cimaux, on parlera de fraction d&233;cimale. Another word for fraction. Donc = = d'apr&232;s ce qu'on a d&233;j&224; vu. Sant&233;. Let us take an example to understand; Example: Add 1/6 and 1/4. Abracadabra re : &233;quation avec fraction &233;gal &224;&224; 17:22. Common Denominator (They both work nicely, use the one you prefer. 8 1-5X Super Macro (Canon EF, Nikon F, Sony E, Pentax K, MFT, Fuji X, EOS-M) Order Now. Compared to: Zhong Yi Mitakon 50mm 0. It also includes three new Generals (! Download for free. 95 means that your statistic is 0. Not only is it an important part of school — from elementary school all the way up to high school — it&39;s also a really practical skill to know. The Voigtlander Nokton 25mm f/0. So in the following comparison we move from the center (left) to the extreme corner (right) and see how the shape of the light circle changes. Nous sommes donc partis de la diff&233;rence entre une fracture et une fraction ( une feuille d&233;chir&233;e en deux morceaux(fracture), puis la m&234;me feuille, bien pli&233;e pour obtenir 2 parts &233;gales( fraction). How do you divide a fraction by a fraction? Partie d'un tout : Une fraction de l'assembl&233;e a vot&233; pour lui portion ; totalit&233; fragment 2. 
95 Lens for Leica M featuring Leica M-Mount Lens, Aperture Range: f/0. Sur le m&234;me sujet. It is defined as the fraction of energy being emitted relative to that emitted by a thermally black surface (a black body). 95 version III for full frame Sony E, Nikon Z and Canon RF cameras. Fraction excr&233;t&233;e du sodium-----Note : Les unit&233;s utilis&233;es n'ont aucune importance, mais vous devez utiliser la m&234;me unit&233; pour les sodium plasmatique et urinaire, et la m&234;me unit&233; pour les cr&233;atinines plasmatique et urinaire. Powered by Wolfram|Alpha. 86: 50+ . 95 lens cannot be made into Nikon version unfortunately. This is adding and subtracting fractions with unlike denominators. Hello Friends,Check out our new video on "What is Fraction? Sitio oficial del Gobierno de la Ciudad de Buenos Aires, Argentina. A voir en vid&233;o sur Futura. Suppose a number has to be divided into four parts, then it is represented as x/4. Support Get help with account-related or technical issues. A&S primarily serves the Northeast and mid-Atlantic United States; view our coverage map for more details. 95 to f/16, One Double-Sided Aspherical Element, Two ED Elements, Eight HR Elements, Manual Focus Operation, Minimum Focusing Distance: 2. Bonnes visites et n. Field of view: 47° (diagonally) 1. The MK II version, when used wide open, showed a slight tangential „light spill“ on off center out of focus highlights, giving them a lemon shape. Last update Friday, J. Description: Fraction Workshop is an amazing drag and drop application that allows students to complete any kind of fraction operation in an online stage with tools to help them. It may come as a surprise, but this is the third incarnation of this lens. Farlex Partner Medical Dictionary. So the fraction here, x/4, defines 1/4th of number x. We used three different distribution tables, and we will give you the 0. 0 might even have a slight edge in the corners. : Columbia University Press, →ISBN, page vii: 1. Yes, it’s a unit fraction. 1,848 Followers, 3,024 Following, 99 Posts - See Instagram photos and videos from 👑KING👑 Description Shockwave 0. NumeratorDenominator You just have to remember those names! Par julienmattys dans le forum Chimie R&233;ponses: 2 Dernier message:. What is Emissivity? 95 III (Sony FE, Canon RF, Nikon Z) Order Now. 279%) and arachidic acid ((C20H40O2, 24. Show off your favorite photos and videos to the world, securely and privately show content to your friends and family, or blog the photos and videos you take with a cameraphone. Established in 1970. Without going too much into technical details mechanical vignetting leads to the truncation of light circles towards the borders of the frame. Tr&228;umen Sie von den eigenen vier W&228;nden? De (IP: 194. 95, the area under the curve is 1 - α → 1 - 0. De fraction &224; d&233;cimal. Herbarium material does not, indeed, allow one to extrapolate safely: what you see is what you get. 95カラット ブランド 誕生石 ピアス シルバー925 ジュエリー・アクセサリー レディースジュエリー・アクセサリー ピアス 紫 おしゃれ 天然石 女性 加工 レディース 天然 アメシスト 猫 女性. Question 1: Is 12/6 a fraction? Fractions represent equal parts of a whole or a collection. And sometimes that confusion extends throughout the entirety of elementary school, where an initial introduction to concepts like numerators and denominators is followed by comparing fractions, adding and subtracting fractions, multiplying fractions, simplifying fractions and so on. Wide open center resolution is okay but there is certainly a bit of glow (spherical aberration). 7 m to infinity. 
Par souci de simplicit&233;, trouvez le nombre le plus &233;lev&233; qui entre. 95 Aperture Noct is designed to be shot wide open. OUR SIGNATURE PRODUCTS (NEW) Mitakon Speedmaster 17mm f/0. I unscrewed the rear of the lens and tightened some internal screws, and it worked as new again. 95 aperture to suit working in low. |古道具坂田|mon sakata|美術館 as it is|. 95 II, but there was definitely room for improvement in some areas, so I was curious to find out if those have been addressed in this redesign. &0183;&32;A few months back I decided that I wanted to pull the trigger on the Mitakon 35mm f/0. 95" are correct in the preceding expressions. The focus ring has a nice resistance and it takes about 300° from Infinity to 0. Due to the large size of the rear element, the new 50mm f/0. 1200 = 130. Von schnellen Servern. 620%) in fraction 3, however erucic acid (C22H42O2, 40. Il y a 2 jours &0183;&32;A Fraction instance can be constructed from a pair of integers, from another rational number, or from a string. Note that "97. The numerator is the number on top of the fraction. Uniting the senses of sight and sound: 0. Ma&223;nahmen Coronavirus. Fraction of a whole: When we divide a whole into equal parts, each part is a fraction of the whole. "1" and "2" are the numerators. 95 aperture, ever since Canon introduced their “Canon Dream” 50mm f/0. Write the decimal fraction as a fraction of the digits to the right of the decimal period (numerator) and a power of 10 (denominator). :| Arthur Swallow Fairs. ASB Bank offers mortgage, KiwiSaver, foreign exchange, loans, insurance, credit cards, accounts, business & investment products to help with your banking needs. Like fractions are fractions that have the same denominator. The Mitakon Speedmaster 17mm f/0. 95 is on a bell curve. The major contained fatty acids were 11-ecosenic acid (C20H38O2, 20. (If you forget just think "Down"-ominator). This is a whole. Une fraction d&233;cimale. Press alt + / to open this menu alt + / to open this menu. Hearty breakfast burritos, tacos, and Mexican food. 95 Z Score probability, percentile, and explanations for all three. Comment fonctionne le convertisseur de nombres d&233;cimaux en fractions. If you go closer (head shot distance) the lens is noticeably softer wide open and I found it to look much nicer between f/1. These values are slightly lower (better) compared to the competition in this class. · TTArtisan 50mm 0. Authorized Conversion Original Creator : 6e66o and friends Corverted to Assetto Corsa by: Tiago Lima Full Grid/ Garages 20 Pit boxes Hotlap working sectors working Thanks to : Mitja Bonca for beta testing and the ai's mrk37 for letting me. Pour placer la fraction \over 5$ sur un axe gradu&233;. 95 maximum aperture, the Mitakon 50mm f/0. Le nombre de pour cent devient le num&233;rateur de la fraction, tandis que 100 est le d&233;nominateur. Try Chrome, Google’s fast modern browser, to get all of the features of Toolbar and more. 95 is an attitude that unites the fine art of precision engineering, quality of materials, and enduring value in exclusive collections that meet the greatest demands. 18 - ค้นหาที่อยู่ IP และที่ตั้ง - email protected 7. The latest update, introduced in, brought the aperture speed down to f/0. F&252;r ausgew&228;hlte Freiheitsgrade (df) und Wahrscheinlichkeiten (1-a) werden die entsprechenden t-Werte (t-Quantile) dargestellt, f&252;r die gilt: W(T&163;t|df) = (1-a). Mount: Sony-E Supply is still somewhat limited, but you can often find this lens on ebay. 
Fractions represent equal partsof a whole or a collection. A fraction means a part of something or a number of parts of something. Rational and returns a new Fraction instance with value numerator/denominator. A part of a whole, especially a comparatively small part. 05 Our critical z value is -1. (chemistry) A component of a mixture, separated by fractionation. Free Online Integral Calculator allows you to solve definite and indefinite integration problems. Fraction (float) class fractions. If you need to simplify fractions, this fraction calculator can do the work for you by entering a regular fraction, mixed fraction or improper fraction then multiply the value by one. 95 includes over 100 new units and a huge amount of new additions and fixes. Please enter your details below and we will send you an email when this item is back in stock. 95 mk2 for the Fujifilm system. With fresh material, taxonomic conclusions are leavened by recognition that the material examined reflects the site it occupied; a herbarium packet gives one only a small fraction of the data desirable for sound conclusions. 10 = 2 remainder 4 ). Fraction definition is - a numerical representation (such as 3/4, 5/8, or 3. Synonyms of fraction a broken or irregular part of something that often remains incomplete if even a tiny fraction of that cookie broke off and fell into the delicate watch works, it could mess things up. An open source emulator for Windows and Windows/SDL capable of running titles such as Tempest and Pinball Fantasies. Fraction Workshop - Online. Simplifying Fraction. Additional funding is provided by the Tiger Baron Foundation, The V & L Marx Foundation in Memory of Virginia and Leonard Marx, Lynne and Marc Benioff, and Epstein Teicher Philanthropies. Some fractions may look different, but are really the same, for example: It is usually best to show an answer using the simplest fraction ( 1/2 in this case ). 5% chance that will be less than − and a 2. You can also see that in the extreme corner there is a pretty abrupt drop in sharpness, something similar (but not to this extent) we have seen with the Voigtlander 40mm 1. Dans cette convention : Tout chiffre plac&233; &224; gauche d’un autre repr&233;sente des unit&233;s de. Authentic American breakfast plates including pancakes and eggs made to order. System Tuning Download: Mit dem kostenlosen PrivaZer 4. 2 Nokton E. Slice an apple, and we get fractions. Avec un nom f&233;minin, l'adjectif s'accorde. There are many examples of fractions you will come across in real life. Buy Leica Noctilux-M 50mm f/0. A number that compares part of an object or a set with the whole, especially the quotient of two whole numbers written in the form a/b. Therefore, they are likely to contain bugs and security vulnerabilities. Frac·tion (frak&39;shŭn), 1. Discussions similaires. File size 551. As a verb, to separate into portions. On regarde les graduations qui coupent l'unit&233; en 5 parts (5 parts qui font 1). Weight: 775g (without hood and caps) 1. For most parts of this review the first sample had been used. 4 this improves to 1. Das Rektorat hat die Schlie&223;ung universit&228;rer Geb&228;ude beschlossen. Flare resistance was pretty much the achilles heel of the Mk II version. Home > 메디칼 > 원격지원서비스. Accessibility Help. Fractio, de frangere, briser 1. Virksomheten omfatter bygge- og anleggsoppdrag, asfaltvirksomhet, pukk og grus samt veivedlikehold. Stock Price: AS. 
Pavadinimas: Alkoholio stiprumo termometras / spiritometras, 0-95‰, 20,5 cm Ilgis: 20,5 cm Skalė: matuoja iki 95‰ stiprumo gėrimus Savybės: - Pagamintas iš stiklo; - Skirtas spirito ir kitų alkoholinių gėrimų stiprumui matuoti; - Naudojamas gaminant stiprią trauktinę, naminę degtinę; - Matu. With a point light source near the corner of the frame most lenses struggle and this one is no exception: And also at night you will often see ghosting with stronger light sources inside and outside the frame: If you need a fast 50mm lens with good flare resistance better have a look at the Voigtlander 50mm 1. Aladin (Originaltitel: Superfantagenio) ist eine Action-Kom&246;die mit Bud Spencer aus dem Jahr 1986 und wurde von Cannon Films produziert. The lens is constructed of 11 elements. The file The End of Days v. Find A Dealer: &169; Copyright L3Harris Technologies, Inc. 8 for acceptable performance and to f/5. Is &92;(&92;frac59&92;) a proper fraction? If you go a bit further away (full body or environmental portrait) I found that a bit of facial detail. 95, something that has not been seen on a 35mm camera since the days of the Canon 7s. 6449 In Microsoft Excel or Google Sheets, you write this function as =NORMSINV(0. Also note that I focused on the corners for these shots, if you focus on the center the corners will look slightly worse. 95 Z Score probability, percentile, and explanations for all. An improper fraction is a fraction where the numerator is larger than the denominator. How do you find the product of a fraction? Jetzt kostenlos herunterladen! Check spelling or type a new query. 95 aperture for spectacular bokeh. · For just 5, you can pick up a 35mm f/0. Answers are fractions in lowest terms or mixed numbers in reduced form. See full list on phillipreeve. Tu trouveras ici notre livre gratuit et t&233;l&233;chargeable de 5 exercices corrig&233;s. , 14h44 9 benji17. In this case it is easy, because we know that 1/4 is the same as 2/8: There are two popular methods to make the denominators the same: 1. A small amount. Weighing in at 2kg, this is a lens which will give your muscles a good work out. That&39;s where lenses like the Voigtlander 60mm f/0. Helstu uppl&253;singar um f&233;lagi&240;, dagstofnanir og annan rekstur, &250;tg&225;fu og starfsf&243;lk. Com: Xiaomi Mi Band 4 Fitness Tracker, Newest 0. 6 EV at f/8. For just 5, you can pick up a 35mm f/0. Fraction (quantity) synonyms, Fraction (quantity) pronunciation, Fraction (quantity) translation, English dictionary definition of Fraction (quantity). Com we deliver our full range of products,. 95 to f/16, Two Aspherical Elements, Five Partial Dispersion Glass Elements, Three High Refractive Index Elements, Floating Elements System, Manual Focus Design, Minimum Focus Distance: 3. The decimals are the numbers expressed in a decimal form which represents fractions, after division. See full list on splashlearn. Fractions show parts of whole numbers, for example, the fraction &92;(&92;frac14&92;) shows a number that is 1 part out of 4, or a quarter. \over 5$correspond donc &224; la premi&232;re graduation. 2 out of 5 are boys. 5&232;me INTERROGATION N&176; 5 (CORRECTION) PAGE 1 Coll&232;ge Roland Dorgel&232;s La calculatrice est interdite. Stay safe and thank you for all that you do! Fractionnel, elle, adj. Technology ED (Extra-Low Dispersion) Glass An optical glass developed by Nikon that is used with normal optical glass in telephoto lenses to obtain optimum correction of chromatic aberrations. 
With Net10 Wireless, get everything you love about your current network for less. Fraction (other_fraction) class fractions. GNU GPL Free Software Open Source Virus Scanner. Resize canvas Lock/hide mouse pointer Lock/hide mouse pointer. For example, There are total of 5 children. 25 are different ways of representing the same fraction. It is usually best to show an answer using the simplest fraction ( 1 / 2 in this case ). 95 APS-C mirrorless lens for Sony E, Fuji X, Canon M, Nikon Z, and MFT mounts: The 7artisans 35mm f/0. Mister A's is just minutes from Downtown and an eternity from the ordinary dining experience. 95, making them perfectly suitable for low light applications. Dictionnaire Electronique des Synonymes (DES) Derni&232;res Actualit&233;s : Lettre d'actualit&233;s n&176; 10 du DES -- T&233;l&233;rama interviewe le DES. Jean-Fran&231;ois Matignon, metteur en sc&232;ne. 95 Nokton Aspherical (,049) come in. 95 aperture. In a eucharistic service, the breaking of the host. Cette fraction est repr&233;sent&233;e par le symbole $$\dfracab$$, appel&233; &233;criture fractionnaire ou notation fractionnaire. It may be slightly better than the old one, but not so much that it is a really meaningful difference. 95 has been greatly improved compared to the MK II version of this lens. Forum The best place to ask for advice, get help using the site, or just talk about WoW. L’utilisation de la calculatrice de conversion est tr&232;s simple. Find the greatest common divisor (gcd) of the numerator and the denominator. Fractions form an important part of our daily lives. Find local businesses, view maps and get driving directions in Google Maps. 6 for a good one. Solution: 1/6 = 0. Der TestAS ist ein zentraler, standardisierter Studierf&228;higkeitstest f&252;r ausl&228;ndische Studierende. 95 S Noct lens comes complete with the custom Trunk Case. 95 lens for Micro Four Thirds is a beautiful, solidly-built, ultra-fast and all-metal, prime lens. There are two parts to a fraction: The number on top shows how many parts there are; The number on the bottom shows how. Une fraction est une division (La division est une loi de composition qui &224; deux nombres associe le produit du premier par. The 25mm f/0. On &233;crira (c'est plus rapide! A) En parlant d'un ensemble de pers. Over 70% New & Buy It Now; This Is The New eBay. 95 Noctilux. 247 zeptosecondes, la fraction de temps la plus courte jamais mesur&233;e. Like fractions 4. A fraction (from Latin fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. And we&39;ll see there&39;s many ways to think about a fraction. Reduce the fraction by dividing the numerator and the denominator with the gcd. And we can consider this a whole. 3 out of 5 are girls. " | Introduction to Fractions by LetstuteIn this video, we have discussed the following points on. 95 is a manual focus lens with a remarkably low price and an f/0. Unfortunately the first sample showed some onion ring structures in out of focus light sources which the manufacturer told me was only due to a faulty mold of the aspherical element. See also decimal fraction, improper fraction, proper fraction. 95 on Sony A7rII The TTArtisan 50mm 0. 234) indicating the quotient of two numbers. Ejection fraction (EF) measures the amount of blood pumped out of your heart ’s lower chambers, or ventricles. 1200 into fraction. This fraction calculator will automatically simplify results. So I got a second review sample. We did not find results for: as. 
1 Th&233;or&232;me : diviser par un nombre revient &224; le multiplier par son inverse; 2. 1200/10000 = 13012/100 Question 3: Add 3/ /15. Google Toolbar is only available for Internet Explorer. This group is strictly for photos taken with or of the Canon 50mm f/0. Flickr is almost certainly the best online photo management and sharing application in the world. It is called an improper fraction. During the contraction, it pushes blood out of large chambers. Visit one of our global sites to learn more about the possibilities in itslearning. Review TTArtisan null. Abonnez-vous pour voir les prochains tuto : Cliquez ici io/xqSsComment simplifier une fraction? First Half Results for the Fiscal Year Ending March, (10,196KB) 年12月01日. The first time kids discover that there’s more to math than whole numbers, they are likely to be a tad confused. Alison©: Allowing Anyone To Study Anything, Anywhere And At Any Time For Free Since. ASI highly encourages you to file claims electronically as this will result in quicker reimbursement for you and safer processing for our employees. 7artisans announced a new 35mm f/0. In available-light photography, it possesses the ability to reveal details that remain hidden to the human eye. 2 days ago · The first version requires that numerator and denominator are instances of numbers. On est aussi par ici! D’une fraction d&233;cimale ce sont donc des nombres rationnels. Retrouver la conjugaison du verbe fraction sur conjugons. Merci et bonne ann&233;e &224; toi. Une fiche d’exercice “somme d’un entier et d’une fraction” Un petit probl&232;me, &224; base de fractions! The number on the top of the line is called the numerator. Finden Sie Ihre passende Finanzierungsl&246;sung. Precision Manual Focusing. I really liked the bokeh oft the old one so I was very curious to find out what they changed about it. Some fractions may look different, but are really the same, for example: It is usually best to show an answer using the simplest fraction (1/2 in this case). Learn, teach, and study with Course Hero. The AS 3959 document is subject to copyright and the only way to buy a copy is to purchase it via the Australian Standards distributer, SAI Global. ; a part as distinct from the whole of anything; portion or section: He received only a fraction of what he was owed. It offers a unique range of features. Performance improves steadily on stopping down, center starts to look really good at f/4. Fractions in maths are painful if you do not grasp the underlying concept behind it. Last analyzed::51:59. On regarde les graduations. 95: Fixed some crashing. Thank you Grant seekers for attending our Workshops Please be reminded our application deadline is December 10th by 5:00 p. Robot's latest theorycraft articles. Rule 8: Any Integer Can Be Written as a Fraction. There is also a very plasticky cheap hood included, which is slightly petal shaped and has felt on the inside. It will become easy if you visualize the number fractions and then solve systematically. I am a big fan of the Zhong Yi Mitakon 50mm 0. So go on ahead and d-load and get out there and play it! In contrast, an infinite continued fraction is an infinite expression. Education and Student Experience provides all the key administration and education support activities at the University. In a finite continued fraction (or terminated continued fraction), the iteration/recursion is terminated after finitely many steps by using an integer in lieu of another continued fraction. 
3 Que penser de la r&232;gle : diviser deux fractions entre elles revient &224; diviser les num&233;rateurs entre eux et les d&233;nominateurs entre eux? Reduce the proper fraction, improper fraction and mixed numbers to its lowest term. But what about when the denominators(the bottom numbers) are not the same? Simplify the fractions. Generally it is similar to the old lens – which is good news to me – but there are also a few notable differences. Prestazioni emergenza COVID-19 La QuAS, in considerazione della crescente domanda da parte dei propri iscritti in merito al rimborso dei tamponi antigenici/molecolari e al rimborso di prestazioni erogate in modalit&224; di Telemedicina, ha introdotto nuove misure straordinarie:. On peut sch&233;matiser la situation de la fa&231;on suivante : Les nombres non d&233;cimaux (rationnels ou non rationnels), comme𝜋, √ t, 1 3 ou 17 111, admettent une unique &233;criture d&233;cimale. 95 the corners are 2. They are: 1. Pour la fraction, on fait la division euclidienne de 13 par 5 : Sur l’axe gradu&233;, la fraction sera donc plac&233;e entre les graduations 2 et 3. Simulator Client Run unlimited free simulations and contribute to the global network. The numerator (20) is less than the denominator (23), so this fraction is a proper fraction. Bleiben Sie virenfrei mit freier Software. 8 (measured) 1. Multiply the numerators for 4 x 1 = 4. Input proper or improper fractions, select the math sign and click Calculate. Find Fractions In Math Now! Mit Postern bringen Sie einzigartige Bildmotive ganz gro&223; raus. Ejection fraction is a measure of how well the heart is pumping blood around the body. A fraction represents a numerical value, which defines the parts of a whole. The best part of decimals are they can be easily used for any arithmetic operations such as addition, subtraction, etc. Blog Pssst. Math explained in easy language, plus puzzles, games, quizzes, videos and worksheets. Pour placer \over 5$. To divide or break into fractions. Master & Dynamic has partnered with Leica Camera AG to create a capsule collection of sound tools inspired by the design of the world&39;s fastest aspherical lens - the legendary Leica Noctilux-M 50 mm f/0. First, note that a Z Score of 0. Ex : fille - nf > On dira "la fille" ou "une fille". Provides a scanning daemon intended primarily for mailserver integration, command line scanner for on-demand scanning, and update tool. The very fast maximum aperture is what sets this lens apart from most of the other 50mm lenses. See more videos for 0. Il existe aussi des nombres qui peuvent s’&233;crire sous forme de fraction mais qui ne sont pas des nombres d&233;cimaux, comme 1 3 ou 17 111. ; a part as distinct from the whole of anything; portion or section: He received only a fraction of what he was. Multiplying by 1 in the form of 3/3 turns 1/3 into the equivalent fraction 3/9 fact five Add and Subtract Equal Sized Parts. La lecture de fraction se lit trois demis se lit quatre tiers se lit quatre quarts se lit cinq sixi&232;mes. The bottom number, called the denominator, represents how many parts there are in total. Die folgende Tabelle zeigt ausgew&228;hlte Werte der inversen Verteilungsfunktion der T-Verteilung: T(1-a|df). The second version requires that other_fraction is an instance of numbers. Diameter: 74mm 1. Complete your Fraction collection. 0 Smart Bracelet Heart Rate Monitor 50 Meters Waterproof Bracelet with 135mAh Battery up to 20 Days Activity Tracker: Computers & Accessories. 
Check what your repair status is today. In either case, all integers in the sequence, other than the first, must be positive. Exemple: 36% passe &224; 36/100. (1-a) entspricht der roten (dunklen) Fl&228;che in der folgenden Abbildung (d. It combines exceptionally high speed with an image performance that ranks alongside that of today's leading lenses and once again extends the composition options of Leica M photography. So, the fraction of girls is three-fifths ( 3⁄5). It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related. Unlike fractions are fractions that are different. 0, but for best midframe and corner sharpness you better stick to f/8. See full list on en. 95 II “Dark Knight”. Question 2: Convert 130. Divide Fractions Visit the Fractions Indexto find out even more. The lens exhibits a bit of field curvature, so focusing on the corners instead of the center will give slightly better results (~ 1 stop gain) there (see also C. Com is subject to change without notice. Window Maker 0. We can also: 1. Asiasoft is an online game operator in Thailand, Vietnam, Indonesia, Philippines, Singapore and Malaysia. 95 and f/2. COLLABORATION 0. 95 Patch-01 - Game mod - Download. Concr&232;tement: voil&224; une technique efficace et rapide : Mettons sous forme de fraction irr&233;ductible (c'est &224; dire: simplifions) Nous allons d&233;composer le num&233;rateur et le d&233;nominateur: - par 2: 990 = 2 X 495 et 420 = 2 X 210. 8 Downloads f&252;r Windows, macOS und Linux. Pick the phone and plan that perfectly fits your needs without a contract. Bonjour, O&249; se trouve le corrig&233; de l’&233;valuation sur les fractions CM? 2 was released on February 14th, and it contains just a few commits on top of 0. At CP+ this improved Mark III model was introduced, it has the following specifications: 1. I had read good things about the lens and seen some great sample images, so I thought to myself "why not! Proper fractions 2. If you learn and visualize fractions the easy way, it will be more fun and exciting. Command & Conquer: Generals - Zero Hour - The End of Days v. How to use your in a sentence. As we already learned enough about fractions, which are the part of a whole. Not to be confused with: friction – surface resistance to relative motion; the rubbing of one surface against another; discord, dissidence, antagonism, clash, contention: The. All rights reserved. At head and shoulder portrait distance this lens seems to perform best wide open. Close Focusing Distance: 0. Site by Reeves Design House. Fraction workshop allows users to practice ordering, reducing, adding, subtracting, multiplying, and dividing fractions and mixed numbers. · Take the two numerators (top numbers) and add them up. If denominator is 0, it raises a ZeroDivisionError. · There has always been something magical about that f/0. A fraction (from Latin fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. The lenses are corrected and coated for the visible range from 400 to 700 nm. Information published on ASRock. For example, Fraction of a collection: Fractions also represent parts of a set or collection. Is &92;(&92;frac65&92;) a proper fraction? Visit Planet Organic and explore our unique range of grocery, health and wellbeing, and beauty products. But how do you add 2/4 + 3/9? View real-time stock prices and stock quotes for a full financial overview. 
Detect packers, cryptors and compilers bundled withPE executables with the help of this reliable piece of software that boasts a high detection rate What&39;s new in PEiD 0. You&39;ll be spinning with knowledge in just a few minutes. Use this fraction calculator for adding, subtracting, multiplying and dividing fractions. Example 1. 5 EV, stopped down to f/2. Dans la version Python ci-dessous, l'algorithme fournit une liste de fractions, toutes de num&233;rateur 1, dont la somme est une fraction donn&233;e f. You can express an integer as a fraction by simply dividing by 1, or you can express any integer as a fraction by simply choosing a numerator and denominator so that the overall value is equal to the integer. Here, you can find out about Canon&39;s S Lenses > 50-85mm > CANON 50mm f/0. Nikon is promising amazing sharpness, and I expect this will be borne out in tests. 6 and beyond. Test f&252;r Ausl&228;ndische Studierende Test for Academic Studies. 95 there are definitely smarter options available. All Rights Reserved. H&230;gt er a&240; s&230;kja um atvinnu, skr&225; sig &237; f&233;lagi&240;, kynna s&233;r lesefni og fleira. Chemisch entwickelte Poster sind Klassiker in Kinder- und Jugendzimmern, in der K&252;che, im Partyraum oder im B&252;ro. 95 APS-C lens for any one of these five mounts: Sony E, Fuji X, Canon M, Nikon Z, and even Micro Four Thirds (MFT). Aspherical Lens A lens with a curved, non-spherical. LA COMPAGNIE. There has always been something magical about that f/0. What type of fraction is 1/4? Its headquarters is located in Ban. Define simple fraction. Rational and returns a Fraction instance with the same value. A&S Building Systems has a long history beginning with its incorporation in 1958 by the two founders of the company, Bill Attebury and Will Schroeder. However many fractions you have, if they have the same bottom numbers, add up all the top numbers. Fraction (plural fractions) 1. Re : Fraction. Hier beginnt der Anstieg durch den Wald in m&228;&223;iger Steigung. Kids will learn to make friends with fractions in these engaging and interactive fractions games. Math Help for Fractions: Easy-to-understand lessons for kids, parents and teachers. (mathematics: part, division) (Maths) fraction nf nom f&233;minin: s'utilise avec les articles "la", "l'" (devant une voyelle ou un h muet), "une". Localiza&231;&227;o: burladingen. Les repr&233;sentations Dans une. I read quite a few reports of similar issues. &187; velkommen inn! Be pampered with an extremely attentive staff and enticed with the finest cuisine. Any photos that do not belong in the group will be promptly removed without notice. · Command & Conquer: Generals - Zero Hour - The End of Days v. BK (Thailand) THB5. 05) Calculate left-tailed value: Our critical z-value. ### 0.95 as a fraction email: [email protected] - phone:(721) 522-1656 x 5419 ### Stewie cool whip - Suppressor homemade -> Angelina jolie eye color -> 194 cm in feet Sitemap 1
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6418177485466003, "perplexity": 11443.525784445985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00339.warc.gz"}
https://eprint.iacr.org/2015/080
## Cryptology ePrint Archive: Report 2015/080

The Fairy-Ring Dance: Password Authenticated Key Exchange in a Group

Feng Hao, Xun Yi, Liqun Chen and Siamak F. Shahandashti

Abstract: In this paper, we study Password Authenticated Key Exchange (PAKE) in a group. First, we present a generic "fairy-ring dance" construction that transforms any secure two-party PAKE scheme into a group PAKE protocol while preserving round efficiency in the optimal way. Based on this generic construction, we present two concrete instantiations, using SPEKE and J-PAKE as the underlying PAKE primitives respectively. The first protocol, called SPEKE+, accomplishes authenticated key exchange in a group with explicit key confirmation in just two rounds. This is more round-efficient than any existing group PAKE protocol in the literature. The second protocol, called J-PAKE+, requires one more round than SPEKE+, but is computationally faster. Finally, we present full implementations of SPEKE+ and J-PAKE+ with detailed performance measurements. Our experiments suggest that both protocols are feasible for practical applications in which the group size may vary from three to several dozen. This makes them useful, we believe, for a wide range of applications -- e.g., to bootstrap secure communication among a group of smart devices in the Internet of Things (IoT).

Category / Keywords: cryptographic protocols
Date: received 3 Feb 2015, last revised 11 Feb 2015
Contact author: haofeng66 at gmail com
Available format(s): PDF | BibTeX Citation
Note: Updated to be consistent with the camera-ready version for IoTPTS'15
Short URL: ia.cr/2015/080
[ Cryptology ePrint archive ]
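The abstract only describes the construction at a high level, so the sketch below is purely illustrative and is not the paper's fairy-ring dance protocol. It shows one well-known way (a Burmester-Desmedt-style ring combination) in which pairwise keys established by a two-party PAKE could be folded into a single group key. The helper `simulated_pairwise_pake`, the password string and all other names are invented for this demo; a real run would involve interactive PAKE messages, explicit rounds and key confirmation, which are all omitted here.

```python
import hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def simulated_pairwise_pake(password: str, i: int, j: int) -> bytes:
    # Stand-in for an interactive two-party PAKE between members i and j;
    # it simply derives a deterministic 32-byte "pairwise key" so that the
    # ring bookkeeping below can be demonstrated end to end.
    lo, hi = sorted((i, j))
    return hashlib.sha256(f"{password}|{lo}|{hi}".encode()).digest()

def group_key(password: str, n: int, me: int) -> bytes:
    # t[i] denotes the pairwise key shared by ring neighbours i and i+1.
    t_me = simulated_pairwise_pake(password, me, (me + 1) % n)
    # Each member i would broadcast delta[i] = t[i] XOR t[i-1]; in this toy
    # every delta can be recomputed locally because the PAKE is simulated.
    deltas = [
        xor(simulated_pairwise_pake(password, i, (i + 1) % n),
            simulated_pairwise_pake(password, (i - 1) % n, i))
        for i in range(n)
    ]
    # Starting from its own t[me], each member walks around the ring,
    # recovers every t[i], and hashes the full (order-independent) set.
    ts = [t_me]
    for step in range(1, n):
        ts.append(xor(ts[-1], deltas[(me + step) % n]))
    return hashlib.sha256(b"".join(sorted(ts))).digest()

if __name__ == "__main__":
    keys = [group_key("correct horse battery", 5, i) for i in range(5)]
    assert len(set(keys)) == 1  # every member derives the same group key
    print(keys[0].hex())
```

In the actual SPEKE+ and J-PAKE+ protocols the pairwise exchanges and the group combination are interleaved so that, per the abstract, the whole exchange finishes in two or three rounds with explicit key confirmation; the toy above ignores rounds and authentication entirely.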
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2494823932647705, "perplexity": 4472.255321430129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00007.warc.gz"}
http://worldwidescience.org/topicpages/c/chemical+abundance+analysis.html
#### Sample records for chemical abundance analysis

1. Chemical abundance analysis of 19 barium stars
Yang, G C; Spite, M; Chen, Y Q; Zhao, G; Zhang, B; Liu, G Q; Liu, Y J; Liu, N; Deng, L C; Spite, F; Hill, V; Zhang, C X (2016-01-01)
We aim at deriving accurate atmospheric parameters and chemical abundances of 19 barium (Ba) stars, including both strong and mild Ba stars, based on the high signal-to-noise ratio and high resolution Echelle spectra obtained from the 2.16 m telescope at Xinglong station of National Astronomical Observatories, Chinese Academy of Sciences. The chemical abundances of the sample stars were obtained from an LTE, plane-parallel and line-blanketed atmospheric model by inputting the atmospheric parameters (effective temperatures, surface gravities, metallicity and microturbulent velocity) and equivalent widths of stellar absorption lines. These sample Ba stars are giants, as indicated by their atmospheric parameters, metallicities and a kinematic analysis of their UVW velocities. Chemical abundances of 17 elements were obtained for these Ba stars. Their light elements (O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn and Ni) are similar to the solar abundances. Our sample of Ba stars shows obvious overabundances of neutron-capture (n-ca...

2. Principal Component Analysis on Chemical Abundances Spaces
Ting, Y S; Kobayashi, C; De Silva, G M; Bland-Hawthorn, J (2011-01-01)
[Shortened] In preparation for the HERMES chemical tagging survey of about a million Galactic FGK stars, we estimate the number of independent dimensions of the space defined by the stellar chemical element abundances [X/Fe]. [...] We explore abundances in several environments, including solar neighbourhood thin/thick disk stars, halo metal-poor stars, globular clusters, open clusters, the Large Magellanic Cloud and the Fornax dwarf spheroidal galaxy. [...] We find that, especially at low metallicity, the production of r-process elements is likely to be associated with the production of alpha-elements. This may support core-collapse supernovae as the r-process site. We also verify the over-abundances of light s-process elements at low metallicity, and find that the relative contribution decreases at higher metallicity, which suggests that this lighter-elements primary process may be associated with massive stars. [...] Our analysis reveals two types of core-collapse supernovae: one produces mainly alpha-e... [See the toy PCA sketch after this list.]

3. Chemical abundance analysis of 19 barium stars
Yang, Guo-Chao; Liang, Yan-Chun; Spite, Monique; Chen, Yu-Qin; Zhao, Gang; Zhang, Bo; Liu, Guo-Qing; Liu, Yu-Juan; Liu, Nian; Deng, Li-Cai; Spite, Francois; Hill, Vanessa; Zhang, Cai-Xia (2016-01-01)
We aim at deriving accurate atmospheric parameters and chemical abundances of 19 barium (Ba) stars, including both strong and mild Ba stars, based on the high signal-to-noise ratio and high resolution Echelle spectra obtained from the 2.16 m telescope at Xinglong station of National Astronomical Observatories, Chinese Academy of Sciences. The chemical abundances of the sample stars were obtained from an LTE, plane-parallel and line-blanketed atmospheric model by inputting the atmospheric parameters (effective temperatures Teff, surface gravities log g, metallicity [Fe/H] and microturbulent velocity ξt) and equivalent widths of stellar absorption lines. These sample Ba stars are giants, as indicated by their atmospheric parameters, metallicities and a kinematic analysis of their UVW velocities. Chemical abundances of 17 elements were obtained for these Ba stars. Their Na, Al, α- and iron-peak elements (O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Ni) are similar to the solar abundances. Our sample of Ba stars shows obvious overabundances of neutron-capture (n-capture) process elements relative to the Sun. Their median abundances of [Ba/Fe], [La/Fe] and [Eu/Fe] are 0.54, 0.65 and 0.40, respectively. The Y I and Zr I abundances are lower than Ba, La and Eu, but higher than the α- and iron-peak elements for the strong Ba stars, and similar to the iron-peak elements for the mild stars. There exists a positive correlation between Ba intensity and [Ba/Fe]. For the n-capture elements (Y, Zr, Ba, La), there is an anti-correlation between their [X/Fe] and [Fe/H]. We identify nine of our sample stars as strong Ba stars with [Ba/Fe] > 0.6, where seven of them have Ba intensity Ba = 2-5, one has Ba = 1.5 and another one has Ba = 1.0. The remaining ten stars are classified as mild Ba stars with 0.17 < [Ba/Fe] < 0.54.

4. Precision Chemical Abundance Measurements
Yong, David; Grundahl, Frank; Meléndez, Jorge (2012-01-01)
This talk covers preliminary work in which we apply a strictly differential line-by-line chemical abundance analysis to high quality UVES spectra of the globular cluster NGC 6752. We achieve extremely high precision in the measurement of relative abundance ratios. Our results indicate that the ob... [See the differential-measurement sketch after this list.]

5. Chemical abundance analysis of the Open Clusters Berkeley 32, NGC 752, Hyades and Praesepe
Carrera, R (2011-01-01)
Context. Open clusters are ideal test particles to study the chemical evolution of the Galactic disc. However, the existing high-resolution abundance determinations, not only of [Fe/H] but also of other key elements, are largely insufficient at the moment. Aims. To increase the number of Galactic open clusters with high quality abundance determinations, and to gather all the literature determinations published so far. Methods. We obtained high-resolution (R ~ 30000), high-quality (S/N > 60 per pixel) spectra for twelve stars in four open clusters with the fiber spectrograph FOCES at the 2.2 m Calar Alto Telescope in Spain. We use the classical equivalent-width analysis to obtain accurate abundances of sixteen elements: Al, Ba, Ca, Co, Cr, Fe, La, Mg, Na, Nd, Ni, Sc, Si, Ti, V, Y. Oxygen abundances have been derived through spectral synthesis of the 6300 Å forbidden line. Results. We provide the first determination of abundance ratios other than Fe for NGC 752 giants, and ratios in agreement with the litera...

6. HE0107-5240, A Chemically Ancient Star. I. A Detailed Abundance Analysis
Christlieb, N; Korn, A J; Barklem, P S; Beers, T C; Bessell, M S; Karlsson, T; Mizuno-Wiedner, M (2004-01-01)
We report a detailed abundance analysis for HE0107-5240, a halo giant with [Fe/H]_NLTE = -5.3. This star was discovered in the course of follow-up medium-resolution spectroscopy of extremely metal-poor candidates selected from the digitized Hamburg/ESO objective-prism survey. On the basis of high-resolution VLT/UVES spectra, we derive abundances for 8 elements (C, N, Na, Mg, Ca, Ti, Fe, and Ni), and upper limits for another 12 elements. A plane-parallel LTE model atmosphere has been specifically tailored for the chemical composition of HE0107-5240. Scenarios for the origin of the abundance pattern observed in the star are discussed. We argue that HE0107-5240 is most likely not a post-AGB star, and that the extremely low abundances of the iron-peak and other elements are not due to selective dust depletion. The abundance pattern of HE0107-5240 can be explained by pre-enrichment from a zero-metallicity type-II supernova of 20-25 M_Sun, plus either self-enrichment with C and N, or production of these elements in the AG...

7. Abundance analysis of an extended sample of open clusters: A search for chemical inhomogeneities
Reddy, Arumalla B. S.; Giridhar, Sunetra; Lambert, David L.
We have initiated a program to explore the presence of chemical inhomogeneities in the Galactic disk using open clusters as ideal probes. We have analyzed high-dispersion echelle spectra (R ≥ 55,000) of red giant members of eleven open clusters to derive abundances for many elements. Membership of the cluster has been confirmed through radial velocities and proper motions. Since the spread in temperatures and gravities among the red giants is very small, nearly the same stellar lines were employed, thereby reducing the random errors. The errors of the average abundance for a cluster were generally in the 0.02 to 0.07 dex range. Our present sample covers galactocentric distances of 8.3 to 11.3 kpc and an age range of 0.2 to 4.3 Gyr. Our earlier analysis of four open clusters (Reddy A.B.S. et al., 2012, MNRAS, 419, 1350) indicates that abundances relative to Fe for elements from Na to Eu are equal, within measurement uncertainties, to published abundances for thin disk giants in the field. This supports the view that field stars come from disrupted open clusters. In the enlarged sample of eleven open clusters we find cluster-to-cluster abundance variations for some s- and r-process elements, with certain elements such as Zr and Ba showing large variation. These differences mark the signatures that these clusters had formed under different environmental conditions (Type II SN, Type Ia SN, AGB stars, or a mixture of any of these) unique to the time and site of formation. These eleven clusters support the widely held impression that there is an abundance gradient such that the metallicity [Fe/H] at the solar galactocentric distance decreases outwards at about -0.1 dex per kpc.

8. Chemical analysis of CH stars - II: atmospheric parameters and elemental abundances
Karinkuzhi, Drisya (2014-01-01)
We present detailed chemical analyses for a sample of twelve stars selected from the CH star catalogue of Bartkevicius (1996). The sample includes two confirmed binaries, four objects that are known to show radial velocity variations, and the rest with no information on their binary status. A primary objective is to examine if all these objects exhibit chemical abundances characteristic of CH stars, based on a detailed chemical composition study using high resolution spectra. We have used high resolution (R ~ 42000) spectra from the ELODIE archive. These spectra cover 3900 to 6800 Å in wavelength. We have estimated the stellar atmospheric parameters, the effective temperature Teff, the surface gravity log g, and metallicity [Fe/H] from LTE analysis using model atmospheres. Estimated temperatures of these objects cover a wide range from 4200 K to 6640 K, the surface gravity from 0.6 to 4.3 and metallicity from -0.13 to -1.5. We report updates on elemental abundances for several heavy elements, Sr,...

9. A new comprehensive set of elemental abundances in DLAs - II. Data analysis and chemical variation studies
Dessauges-Zavadsky, M; D'Odorico, S; Calura, F; Matteucci, F (2005-01-01)
We present new elemental abundance studies of seven damped Lyman-alpha systems (DLAs). Together with the four DLAs analyzed in Dessauges-Zavadsky et al. (2004), we have a sample of eleven DLA galaxies with uniquely comprehensive and homogeneous abundance measurements. These observations allow one to study the abundance patterns of 22 elements and the chemical variations in the interstellar medium of galaxies outside the Local Group. Comparing the gas-phase abundance ratios of these high redshift galaxies, we found that they show low RMS dispersions, reaching only up to 2-3 times the statistical errors for the majority of elements. This uniformity is remarkable given that the quasar sightlines cross gaseous regions with HI column densities spanning over one order of magnitude and metallicities ranging from 1/55 to 1/5 solar. The gas-phase abundance patterns of interstellar medium clouds within the DLA galaxies detected along the velocity profiles show, on the other hand, a high dispersion in several abundance rat...

10. Chemical analysis of CH stars - I: atmospheric parameters and elemental abundances
Karinkuzhi, Drisya (2014-01-01)
Results from high-resolution spectral analyses of a selected sample of CH stars are presented. Detailed chemical composition studies of these objects, which could reveal abundance patterns that in turn provide information regarding nucleosynthesis and evolutionary status, are scarce in the literature. We conducted detailed chemical composition studies for these objects based on high resolution (R ~ 42000) spectra. The spectra were taken from the ELODIE archive and cover 3900 to 6800 Å in wavelength. We estimated the stellar atmospheric parameters, the effective temperature Teff, the surface gravity log g, and metallicity [Fe/H] from local thermodynamic equilibrium analyses using model atmospheres. Estimated temperatures of these objects cover a wide range from 4550 K to 6030 K, the surface gravity from 1.8 to 3.8 and metallicity from -0.18 to -1.4. We report updates on elemental abundances for several heavy elements and present estimates of abundance ratios of Sr, Y, Zr, B...

11. Chemical abundance analysis of the old, rich open cluster Trumpler 20
Carraro, Giovanni; Monaco, Lorenzo; Beccari, Giacomo; Ahumada, Javier; Boffin, Henri (2014-01-01)
Trumpler 20 is an open cluster located at low Galactic longitude, just beyond the great Carina spiral arm, whose metallicity and fundamental parameters were very poorly known until now. As it is most likely a rare example of an old, rich open cluster -- possibly a twin of NGC 7789 -- it is useful to characterize it. To this end, we determine here the abundance of several elements and their ratios in a sample of stars in the clump of Trumpler 20. The primary goal is to measure Trumpler 20's metallicity, so far very poorly constrained, and revise the cluster's fundamental parameters. We present high-resolution spectroscopy of eight clump stars. Based on their radial velocities, we identify six bona fide cluster members, and for five of them (the sixth being a fast rotator) we perform a detailed abundance analysis. We find that Trumpler 20 is slightly more metal-rich than the Sun, having [Fe/H] = +0.09 ± 0.10. The abundance ratios of alpha-elements are generally solar. In line with recent studies of clusters a...

12.
Origin of Cosmic Chemical Abundances Maio, Umberto 2015-01-01 Cosmological N-body hydrodynamic computations following atomic and molecular chemistry (e-, H, H+, H-, He, He+, He++, D, D+, H2, H2+, HD, HeH+), gas cooling, star formation and production of heavy elements (C, N, O, Ne, Mg, Si, S, Ca, Fe, etc.) from stars covering a range of mass and metallicity are used to explore the origin of several chemical abundance patterns and to study both the metal and molecular content during simulated galaxy assembly. The resulting trends show a remarkable similarity to up-to-date observations of the most metal-poor damped Lyman-α absorbers at redshift z ≳ 2. These exhibit a transient nature and represent collapsing gaseous structures captured while cooling is becoming effective in lowering the temperature below ~10^4 K, before they are disrupted by episodes of star formation or tidal effects. Our theoretical results agree with the available data for typical elemental ratios, such as [C/O], [Si/Fe], [O/Fe], [Si/O], [Fe/H], [O/... 13. Solar System chemical abundances corrected for systematics Gonzalez, Guillermo 2014-01-01 The relative chemical abundances between CI meteorites and the solar photosphere exhibit a significant trend with condensation temperature. A trend with condensation temperature is also seen when the solar photospheric abundances are compared to those of nearby solar twins. We use both these trends to determine the alteration of the elemental abundances of the meteorites and the photosphere by fractionation, and calculate a new set of primordial Solar System abundances. 14. ANALYSIS OF TWO SMALL MAGELLANIC CLOUD H II REGIONS CONSIDERING THERMAL INHOMOGENEITIES: IMPLICATIONS FOR THE DETERMINATIONS OF EXTRAGALACTIC CHEMICAL ABUNDANCES We present long-slit spectrophotometry considering the presence of thermal inhomogeneities (t2) of two H II regions in the Small Magellanic Cloud (SMC): NGC 456 and NGC 460. Physical conditions and chemical abundances were determined for three positions in NGC 456 and one position in NGC 460, first under the assumption of uniform temperature and then allowing for the possibility of thermal inhomogeneities. We determined t2 values based on three different methods: (1) by comparing the temperature derived using oxygen forbidden lines with the temperature derived using helium recombination lines (RLs), (2) by comparing the abundances derived from oxygen forbidden lines with those derived from oxygen RLs, and (3) by comparing the abundances derived from ultraviolet carbon forbidden lines with those derived from optical carbon RLs. The first two methods averaged t2 = 0.067 ± 0.013 for NGC 456 and t2 = 0.036 ± 0.027 for NGC 460. These values of t2 imply that when gaseous abundances are determined with collisionally excited lines, they are underestimated by a factor of nearly two. From these objects and others in the literature, we find that in order to account for thermal inhomogeneities and dust depletion, the O/H ratio in low-metallicity H II regions should be corrected by 0.25-0.45 dex depending on the thermal structure of the nebula, or by 0.35 dex if such information is not available. 15. Chemical abundances and kinematics of 257 G-, K-type field giants.
Setting a base for further analysis of giant-planet properties orbiting evolved stars Adibekyan, V Zh; Santos, N C; Alves, S; Lovis, C; Udry, S; Israelian, G; Sousa, S G; Tsantaki, M; Mortier, A; Sozzetti, A; De Medeiros, J R 2015-01-01 We performed a uniform and detailed abundance analysis of 12 refractory elements (Na, Mg, Al, Si, Ca, Ti, Cr, Ni, Co, Sc, Mn, and V) for a sample of 257 G- and K-type evolved stars from the CORALIE planet search program. To date, only one of these stars is known to harbor a planetary companion. We aimed to characterize this large sample of evolved stars in terms of chemical abundances and kinematics, thus setting a solid base for further analysis of planetary properties around giant stars. This sample, being homogeneously analyzed, can be used as a comparison sample for other planet-related studies, as well as for different types of studies related to stellar and Galactic astrophysics. The abundances of the chemical elements were determined using an LTE abundance analysis relative to the Sun, with the spectral synthesis code MOOG and a grid of Kurucz ATLAS9 atmospheres. To separate the Galactic stellar populations, both a purely kinematical approach and a chemical method were applied. We confirm the overabundance... 16. A high precision chemical abundance analysis of the HAT-P-1 stellar binary: constraints on planet formation Liu, F; Asplund, M.; Ramirez, I.; Yong, D.; Melendez, J. 2014-01-01 We present a high-precision, differential elemental abundance analysis of the HAT-P-1 stellar binary based on high-resolution, high signal-to-noise ratio Keck/HIRES spectra. The secondary star in this double system is known to host a transiting giant planet, while no planets have yet been detected around the primary star. The derived metallicities ([Fe/H]) of the primary and secondary stars are identical within the errors: 0.146 ± 0.014 dex (σ = 0.033 dex) and 0.155 ± 0.007 dex ... 17. Chemical Abundances and Milky Way Formation Gilmore, Gerry; Wyse, Rosemary F. G. 2004-01-01 Stellar chemical element ratios have well-defined systematic trends as a function of abundance, with an excellent correlation of these trends with stellar populations defined kinematically. This is remarkable, and has significant implications for Galactic evolution. The source function, the stellar Initial Mass Function, must be nearly invariant with time, place and metallicity. Each forming star must see a well-mixed mass-averaged IMF yield, implying low star formation rates, with most star ... 18. Chemical abundances and kinematics of barium stars de Castro, D. B.; Pereira, C. B.; Roig, F.; Jilinski, E.; Drake, N. A.; Chavero, C.; Silva, J. V. Sales 2016-04-01 In this paper we present a homogeneous analysis of photospheric abundances based on high-resolution spectroscopy of a sample of 182 barium stars and candidates. We determined atmospheric parameters, spectroscopic distances, stellar masses, ages, luminosities and scale height, radial velocities, and abundances of Na, Al, alpha-elements, iron-peak elements, and the s-process elements Y, Zr, La, Ce, and Nd. We employed the local-thermodynamic-equilibrium model atmospheres of Kurucz and the spectral analysis code MOOG. We found that the metallicities, the temperatures and the surface gravities for barium stars cannot be represented by a single Gaussian distribution. The abundances of alpha-elements and iron-peak elements are similar to those of field giants with the same metallicity.
Sodium presents some degree of enrichment in more evolved stars that could be attributed to the NeNa cycle. As expected, the barium stars show overabundance of the elements created by the s-process. By measuring the mean heavy-element abundance pattern as given by the ratio [s/Fe], we found that the barium stars present several degrees of enrichment. We also obtained the [hs/ls] ratio by measuring the photospheric abundances of the Ba-peak and the Zr-peak elements. Our results indicated that the [s/Fe] and the [hs/ls] ratios are strongly anti-correlated with the metallicity. Our kinematical analysis showed that 90% of the barium stars belong to the thin disk population. Based on their luminosities, none of the barium stars are luminous enough to be an AGB star, nor to become self-enriched in the s-process elements. Finally, we determined that the barium stars also follow an age-metallicity relation. 20. Chemical Constraints on the Oxygen Abundances in Jupiter and Saturn Wang, Dong 2012-01-01 We perform a comparative analysis of the chemical kinetics of CO and PH3 in Jupiter and Saturn to assess the full set of constraints available on the tropospheric water abundance in the two giant planets. For carbon monoxide we employ both a widely used CO kinetic scheme from Yung et al., and a newly identified CO chemical scheme from Visscher and Moses. For PH3 we use the same chemical scheme as in Visscher and Fegley. Yung's chemical scheme for CO yields a water enrichment of 0.95 - 23.0 times solar abundance on Jupiter, and an upper limit of 14.0 for Saturn.
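The records above mix logarithmic offsets quoted in dex with linear "times solar" enrichment factors; the two are related by factor = 10^dex. A small illustrative conversion, using the 0.35 dex O/H correction and the Jovian water-enrichment range quoted above (plain arithmetic, no new data):

```python
import math

def dex_to_factor(dex: float) -> float:
    """Convert a logarithmic abundance offset in dex to a linear factor."""
    return 10.0 ** dex

def factor_to_dex(factor: float) -> float:
    """Convert a linear enrichment factor to dex."""
    return math.log10(factor)

# The 0.35 dex O/H correction quoted above is a factor of ~2.2,
# i.e. "underestimated by a factor of nearly two".
print(f"0.35 dex -> x{dex_to_factor(0.35):.2f}")

# The 0.95-23 times solar water-enrichment range expressed in dex.
lo, hi = factor_to_dex(0.95), factor_to_dex(23.0)
print(f"0.95-23 times solar -> {lo:+.2f} to {hi:+.2f} dex")
```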
Visscher's chemical scheme in contrast produces a water enrichment of 0.24 - 2.6 times solar abundance in Jupiter, and for Saturn an upper limit for water enrichment of 8.0. From this scheme, which takes advantage of the most up-to-date kinetics, we preclude high water enrichments on Jupiter and Saturn, and show that the kinetics approach yields Jovian bulk abundance in which values of C/O elevated relative to solar are adm... 2. Circumstellar molecular composition of the oxygen-rich AGB star IK Tau: I. Observations and LTE chemical abundance analysis Kim, Hyunjoo; Menten, Karl M; Decin, Leen 2010-01-01 The aim of this paper is to study the molecular composition in the circumstellar envelope around the oxygen-rich star IK Tau. We observed IK Tau in several (sub)millimeter bands using the APEX telescope during three observing periods. To determine the spatial distribution of the 12CO(3-2) emission, mapping observations were performed. To constrain the physical conditions in the circumstellar envelope, multiple rotational CO emission lines were modeled using a non-local thermodynamic equilibrium radiative transfer code. The rotational temperatures and the abundances of the other molecules were obtained assuming local thermodynamic equilibrium. An oxygen-rich Asymptotic Giant Branch star has been surveyed in the submillimeter wavelength range. Thirty-four transitions of twelve molecular species, including maser lines, were detected. The kinetic temperature of the envelope was determined and the molecular abundance fractions of the molecules were estimated. The deduced molecular abundances were com... 3. Analysis of chemical abundances in planetary nebulae with [WC] central stars. I. Line intensities and physical conditions García-Rojas, J.; Peña, M.; Morisset, C.; Mesa-Delgado, A.; Ruiz, M. T. 2012-02-01 Context. Planetary nebulae (PNe) around Wolf-Rayet [WR] central stars ([WR]PNe) constitute a particular photoionized nebula class that represents about 10% of the PNe with classified central stars. Aims: We analyse deep high-resolution spectrophotometric data of 12 [WR]PNe. This sample of [WR]PNe is the most extensive analysed so far at such high spectral resolution. We aim to select the optimal physical conditions in the nebulae to be used in ionic abundance calculations that will be presented in a forthcoming paper.
Methods: We acquired spectra at Las Campanas Observatory with the 6.5-m telescope and the Magellan Inamori Kyocera (MIKE) spectrograph, covering a wavelength range from 3350 Å to 9400 Å. The spectra were exposed deep enough to detect, with signal-to-noise ratio higher than three, the weak optical recombination lines (ORLs) of O ii, C ii, and other species. We detect and identify about 2980 emission lines, which, to date, is the most complete set of spectrophotometric data published for this type of object. From our deep data, numerous diagnostic line ratios for Te and ne are determined from collisionally excited lines (CELs), ORLs, and continuum measurements (H i Paschen continuum in particular). Results: Densities are closely described by the average of all determined values for each object. The behaviour of both temperatures agrees with the predictions of the temperature fluctuations paradigm, owing to the large errors in Te(H i). We do not find any evidence of low-temperature, high-density clumps in our [WR]PNe from the analysis of faint O ii and N ii plasma diagnostics, although uncertainties dominate the observed line ratios in most objects. The behaviour of Te([O iii])/Te([N ii]), which is smaller for high ionization degrees, can be reproduced by a set of combined matter-bounded and radiation-bounded models, although, for the smallest temperature ratios, a too-high metallicity seems to be required. 4. Chemical abundance analysis of the Open Clusters Cr110, NGC2099 (M37), NGC2420, NGC7789 and M67 (NGC2682) Pancino, E; Rossetti, E; Gallart, C 2009-01-01 (Abridged) The present number of Galactic Open Clusters that have high-resolution abundance determinations, not only of [Fe/H], but also of other key elements, is largely insufficient to enable a clear modeling of the Galactic Disk chemical evolution. We obtained high-resolution (R~30000), high quality (S/N~50-100 per pixel), echelle spectra with FOCES, at Calar Alto, for 3 red clump stars in each of five Open Clusters. We used the classical Equivalent Width analysis method to obtain accurate abundances of 16 elements. We also derived the oxygen abundance through spectral synthesis of the 6300 Å forbidden line. Three of the clusters were never studied previously with high-resolution: we found [Fe/H]=+0.03 +/- 0.02 dex for Cr110; [Fe/H]=+0.01 +/- 0.05 dex for NGC2099 (M37) and [Fe/H]=-0.05 +/- 0.03 dex for NGC2420. For the remaining clusters, we find: [Fe/H]=+0.05 +/- 0.02 dex for M67 and [Fe/H]=+0.04 +/- 0.07 dex for NGC7789. We provide the first high-resolution based velocity estimate for Cr110, V=41.0 +/- 3.... 5. ASPCAP: The Apogee Stellar Parameter and Chemical Abundances Pipeline Pérez, Ana E García; Holtzman, Jon A; Shetrone, Matthew; Mészáros, Szabolcs; Bizyaev, Dmitry; Carrera, Ricardo; Cunha, Katia; García-Hernández, D A; Johnson, Jennifer A; Majewski, Steven R; Nidever, David L; Schiavon, Ricardo P; Shane, Neville; Smith, Verne V; Sobeck, Jennifer; Troup, Nicholas; Zamora, Olga; Bovy, Jo; Eisenstein, Daniel J; Feuillet, Diane; Frinchaboy, Peter M; Hayden, Michael R; Hearty, Fred R; Nguyen, Duy C; O'Connell, Robert W; Pinsonneault, Marc H; Weinberg, David H; Wilson, John C; Zasowski, Gail 2015-01-01 The Apache Point Observatory Galactic Evolution Experiment (APOGEE) has built the largest moderately high-resolution (R = 22,500) spectroscopic map of the stars across the Milky Way, including dust-obscured areas.
The APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP) is the software developed for the automated analysis of these spectra. ASPCAP determines atmospheric parameters and chemical abundances from observed spectra by comparing observed spectra to libraries of theoretical spectra, using chi-2 minimization in a multidimensional parameter space. The package consists of a fortran90 code that does the actual minimization, and a wrapper IDL code for book-keeping and data handling. This paper explains in detail the ASPCAP components and functionality, and presents results from a number of tests designed to check its performance. ASPCAP provides stellar effective temperatures, surface gravities, and metallicities precise to 2%, 0.1 dex, and 0.05 dex, respectively, for most APOGEE stars, wh... 6. Chemical abundances of distant extremely metal-poor unevolved stars Bonifacio, P; Caffau, E; Ludwig, H -G; Spite, M; Hernández, J I González; Behara, N T 2012-01-01 Aims: The purpose of our study is to determine the chemical composition of a sample of 16 candidate Extremely Metal-Poor (EMP) dwarf stars, extracted from the Sloan Digital Sky Survey (SDSS). There are two main purposes: in the first place to verify the reliability of the metallicity estimates derived from the SDSS spectra; in the second place to see if the abundance trends found for the brighter nearer stars studied previously also hold for this sample of fainter, more distant stars. Methods: We used the UVES at the VLT to obtain high-resolution spectra of the programme stars. The abundances were determined by an automatic analysis with the MyGIsFOS code, with the exception of lithium, for which the abundances were determined from the measured equivalent widths of the Li I resonance doublet. Results: All candidates are confirmed to be EMP stars, with [Fe/H]<= -3.0. The chemical composition of the sample of stars is similar to that of brighter and nearer samples. We measured the lithium abundance for 12 st... 7. ASPCAP: The APOGEE Stellar Parameter and Chemical Abundances Pipeline García Pérez, Ana E.; Allende Prieto, Carlos; Holtzman, Jon A.; Shetrone, Matthew; Mészáros, Szabolcs; Bizyaev, Dmitry; Carrera, Ricardo; Cunha, Katia; García-Hernández, D. A.; Johnson, Jennifer A.; Majewski, Steven R.; Nidever, David L.; Schiavon, Ricardo P.; Shane, Neville; Smith, Verne V.; Sobeck, Jennifer; Troup, Nicholas; Zamora, Olga; Weinberg, David H.; Bovy, Jo; Eisenstein, Daniel J.; Feuillet, Diane; Frinchaboy, Peter M.; Hayden, Michael R.; Hearty, Fred R.; Nguyen, Duy C.; O’Connell, Robert W.; Pinsonneault, Marc H.; Wilson, John C.; Zasowski, Gail 2016-06-01 The Apache Point Observatory Galactic Evolution Experiment (APOGEE) has built the largest moderately high-resolution (R ≈ 22,500) spectroscopic map of the stars across the Milky Way, and including dust-obscured areas. The APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP) is the software developed for the automated analysis of these spectra. ASPCAP determines atmospheric parameters and chemical abundances from observed spectra by comparing observed spectra to libraries of theoretical spectra, using χ2 minimization in a multidimensional parameter space. The package consists of a fortran90 code that does the actual minimization and a wrapper IDL code for book-keeping and data handling. This paper explains in detail the ASPCAP components and functionality, and presents results from a number of tests designed to check its performance. 
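The ASPCAP records above describe deriving stellar parameters by comparing each observed spectrum against a library of synthetic spectra with χ² minimization in a multidimensional parameter space. The sketch below illustrates that grid-matching idea on fabricated Gaussian-line spectra; the toy_spectrum model, the grid values and the noise level are all assumptions for the demo, not ASPCAP's actual FORTRAN90/IDL implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
wave = np.linspace(15100, 15200, 500)  # toy wavelength grid (Angstrom)

def toy_spectrum(teff, logg):
    """Fake normalized spectrum: two line depths vary smoothly with (Teff, logg)."""
    depth1 = 0.6 * np.exp(-(teff - 4000) / 4000)
    depth2 = 0.4 * np.exp(-logg / 5.0)
    line1 = depth1 * np.exp(-0.5 * ((wave - 15130) / 0.8) ** 2)
    line2 = depth2 * np.exp(-0.5 * ((wave - 15170) / 1.0) ** 2)
    return 1.0 - line1 - line2

# Library of synthetic spectra on a coarse (Teff, logg) grid.
teff_grid = np.arange(3500, 6001, 250)
logg_grid = np.arange(0.0, 5.01, 0.5)

# "Observed" spectrum: a grid point plus Gaussian noise.
true_teff, true_logg, sigma = 4750.0, 2.5, 0.01
observed = toy_spectrum(true_teff, true_logg) + rng.normal(0.0, sigma, wave.size)

# Chi-squared over the whole grid; the minimum marks the best-fit parameters.
best, best_chi2 = None, np.inf
for teff in teff_grid:
    for logg in logg_grid:
        chi2 = np.sum(((observed - toy_spectrum(teff, logg)) / sigma) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = (teff, logg), chi2

print("best-fit (Teff, logg):", best, "reduced chi2:", best_chi2 / wave.size)
```

A real pipeline would typically interpolate between grid points and propagate uncertainties rather than simply keeping the nearest grid node, but the χ²-over-a-library principle is the same.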
ASPCAP provides stellar effective temperatures, surface gravities, and metallicities precise to 2%, 0.1 dex, and 0.05 dex, respectively, for most APOGEE stars, which are predominantly giants. It also provides abundances for up to 15 chemical elements with various levels of precision, typically under 0.1 dex. The final data release (DR12) of the Sloan Digital Sky Survey III contains an APOGEE database of more than 150,000 stars. ASPCAP development continues in the SDSS-IV APOGEE-2 survey. 8. Chemical abundances of A-type dwarfs in the young open cluster M6 Kílíçoǧlu, T.; Monier, R.; Fossati, L. 2011-12-01 Elemental abundance analysis of five members in the open cluster M6 (age ˜90 myr) were performed using FLAMES-GIRAFFE spectrograph mounted on 8-meter class VLT telescopes. The abundances of 14 chemical elements were derived. Johnson and Geneva photometric systems, hydrogen line profile fittings, and ionization equilibrium were used to derive the atmospheric parameters of the stars. Synthetic spectra were compared to the observed spectra to derive chemical abundances. The abundance analysis of these five members shows that these stars have an enhancement (or solar composition) of metals in general, with some exceptions. C, O, Ca, Sc, Ni, Y, and Ba exhibit the largest star-to-star abundance variations. 9. Evolution of chemical abundances in Seyfert galaxies Ballero, S. K.; Matteucci, F; Ciotti, L.; Calura, F; P. Padovani 2007-01-01 We computed the chemical evolution of spiral bulges hosting Seyfert nuclei, based on updated chemical and spectro-photometrical evolution models for the bulge of our Galaxy, made predictions about other quantities measured in Seyferts, and modeled the photometry of local bulges. The chemical evolution model contains detailed calculations of the Galactic potential and of the feedback from the central supermassive black hole, and the spectro-photometric model covers a wide range of stellar ages... 10. Chemical Abundance Inhomogeneities in Globular Cluster Stars Cohen, Judith G. 2004-01-01 It is now clear that abundance variations from star-to-star among the light elements, particularly C, N, O, Na and Al, are ubiquitous within galactic globular clusters; they appear seen whenever data of high quality is obtained for a sufficiently large sample of stars within such a cluster. The correlations and anti-correlations among these elements and the range of variation of each element appear to be independent of stellar evolutionary state, with the exception that enhanced depletion of ... 11. Model reduction for stochastic chemical systems with abundant species Smith, Stephen; Cianci, Claudia; Grima, Ramon 2015-12-01 Biochemical processes typically involve many chemical species, some in abundance and some in low molecule numbers. We first identify the rate constant limits under which the concentrations of a given set of species will tend to infinity (the abundant species) while the concentrations of all other species remains constant (the non-abundant species). Subsequently, we prove that, in this limit, the fluctuations in the molecule numbers of non-abundant species are accurately described by a hybrid stochastic description consisting of a chemical master equation coupled to deterministic rate equations. This is a reduced description when compared to the conventional chemical master equation which describes the fluctuations in both abundant and non-abundant species. 
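The model-reduction record above couples a chemical master equation for the low-copy species to deterministic rate equations for the abundant ones. The sketch below illustrates the spirit of that reduction on a toy enzyme-catalysis network: the abundant substrate is held at a fixed (deterministic) concentration while the low-copy enzyme and product counts are advanced with a standard Gillespie simulation. The rate constants and copy numbers are assumed toy values; this is a schematic of the hybrid idea, not the paper's exact reduced master equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rate constants (assumed, arbitrary units).
k_bind, k_unbind, k_cat = 0.01, 0.1, 0.05
s_conc = 100.0        # abundant substrate: held fixed (deterministic limit)
E, ES, P = 10, 0, 0   # low-copy species: simulated stochastically
t, t_end = 0.0, 50.0

while t < t_end:
    # Propensities of the three reactions involving the low-copy species.
    a = np.array([k_bind * E * s_conc,   # E + S -> ES
                  k_unbind * ES,         # ES -> E + S
                  k_cat * ES])           # ES -> E + P
    a_tot = a.sum()
    if a_tot == 0.0:
        break
    t += rng.exponential(1.0 / a_tot)    # waiting time to the next reaction
    r = rng.choice(3, p=a / a_tot)       # which reaction fires
    if r == 0:
        E, ES = E - 1, ES + 1
    elif r == 1:
        E, ES = E + 1, ES - 1
    else:
        ES, E, P = ES - 1, E + 1, P + 1

print(f"t = {t:.1f}: E = {E}, ES = {ES}, P = {P}")
```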
We show that the reduced master equation can be solved exactly for a number of biochemical networks involving gene expression and enzyme catalysis, whose conventional chemical master equation description is analytically impenetrable. We use the linear noise approximation to obtain approximate expressions for the difference between the variance of fluctuations in the non-abundant species as predicted by the hybrid approach and by the conventional chemical master equation. Furthermore, we show that, surprisingly, irrespective of any separation in the mean molecule numbers of various species, the conventional and hybrid master equations exactly agree for a class of chemical systems. 13. Stellar Chemical Abundances: In Pursuit of the Highest Achievable Precision Bedell, M; Bean, J; Ramirez, I; Leite, P; Asplund, M 2014-01-01 The achievable level of precision on photospheric abundances of stars is a major limiting factor on investigations of exoplanet host star characteristics, the chemical histories of star clusters, and the evolution of the Milky Way and other galaxies. While model-induced errors can be minimized through the differential analysis of spectrally similar stars, the maximum achievable precision of this technique has been debated. As a test, we derive differential abundances of 19 elements from high-quality asteroid-reflected solar spectra taken using a variety of instruments and conditions. We treat the solar spectra as being from unknown stars and use the resulting differential abundances, which are expected to be zero, as a diagnostic of the error in our measurements. Our results indicate that the relative resolution of the target and reference spectra is a major consideration, with use of different instruments to obtain the two spectra leading to errors up to 0.04 dex.
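Several records above (the NGC 6752 talk and the solar-spectrum test in particular) rely on strictly differential, line-by-line abundances, in which each line measured in the target star is referenced to the same line in a comparison star so that shared systematics (gf-values, continuum placement, model deficiencies) largely cancel. A minimal sketch of that bookkeeping; the line identifiers and abundance values are invented for illustration only.

```python
import statistics

# Hypothetical line-by-line abundances A(Fe) = log10(N_Fe/N_H) + 12 for the
# same set of Fe lines measured in a target star and in a reference star.
target_lines    = {"Fe6056": 7.52, "Fe6065": 7.48, "Fe6137": 7.55, "Fe6151": 7.50}
reference_lines = {"Fe6056": 7.46, "Fe6065": 7.44, "Fe6137": 7.49, "Fe6151": 7.45}

# Differential abundance per line: common systematics drop out of the difference.
deltas = [target_lines[line] - reference_lines[line] for line in target_lines]

mean_delta = statistics.mean(deltas)
scatter = statistics.stdev(deltas)
error_of_mean = scatter / len(deltas) ** 0.5

print(f"[Fe/H]_target - [Fe/H]_ref = {mean_delta:+.3f} "
      f"+/- {error_of_mean:.3f} (line-to-line sigma = {scatter:.3f})")
```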
Use of the same instrument at different epoc... 14. Evolution of chemical abundances in Seyfert galaxies Ballero, S K; Ciotti, L; Calura, F; Padovani, P 2007-01-01 We computed the chemical evolution of spiral bulges hosting Seyfert nuclei, based on updated chemical and spectro-photometrical evolution models for the bulge of our Galaxy, made predictions about other quantities measured in Seyferts, and modeled the photometry of local bulges. The chemical evolution model contains detailed calculations of the Galactic potential and of the feedback from the central supermassive black hole, and the spectro-photometric model covers a wide range of stellar ages and metallicities. We followed the evolution of bulges in the mass range 10^9 - 10^{11} Msun by scaling the star formation efficiency and the bulge scalelength as in the inverse-wind scenario for elliptical galaxies, and considering an Eddington limited accretion onto the central supermassive black hole. We successfully reproduced the observed black hole-host bulge mass relation. The observed nuclear bolometric luminosity is reproduced only at high redshift or for the most massive bulges; in the other cases, at z = 0 a r... 15. Constraints on chemical evolution models from QSOALS abundances Lauroesch, J. T. 1993-01-01 Models of the formation and early chemical evolution of our Galaxy are guided and constrained by our knowledge of abundances in globular cluster stars and halo field stars. The abundance patterns identified in halo and disk stars should be discernible in absorption lines of gas clouds in forming galaxies which are accidentally lying in front of background QSO's. Conversely, the ensemble of QSO absorption line systems (QSOALS) at each redshift may suggest a detailed model for the formation of our Galaxy that is testable using abundance patterns in halo stars. 16. Chemical abundances in LMC stellar populations. II. The bar sample Van der Swaelmen, M; Primas, F; Cole, A A 2013-01-01 This paper compares the chemical evolution of the Large Magellanic Cloud (LMC) to that of the Milky Way (MW) and investigates the relation between the bar and the inner disc of the LMC in the context of the formation of the bar. We obtained high-resolution and mid signal-to-noise ratio spectra with FLAMES/GIRAFFE at ESO/VLT and performed a detailed chemical analysis of 106 and 58 LMC field red giant stars (mostly older than 1 Gyr), located in the bar and the disc of the LMC respectively. We measured elemental abundances for O, Mg, Si, Ca, Ti, Na, Sc, V, Cr, Co, Ni, Cu, Y, Zr, Ba, La and Eu. We find that the {\\alpha}-element ratios [Mg/Fe] and [O/Fe] are lower in the LMC than in the MW while the LMC has similar [Si/Fe], [Ca/Fe], and [Ti/Fe] to the MW. As for the heavy elements, [Ba,La/Eu] exhibit a strong increase with increasing metallicity starting from [Fe/H]=-0.8 dex, and the LMC has lower [Y+Zr/Ba+La] ratios than the MW. Cu is almost constant over all metallicities and about 0.5 dex lower in the LMC than ... 17. Oxygen abundances and the chemical evolution of spiral galaxies Tosi., M; Angeles I. Díaz 1983-01-01 This is an electronic version of an article published in Memorie della Società Astronomica Italiana. Tosi, M. and A. I. Díaz. Oxygen abundances and the chemical evolution of spiral galaxies. Memorie della Società Astronomica Italiana 54, 4 (1983): 889-890 18. 
Chemical abundances from planetary nebulae in local spiral galaxies Richer, M G 2015-01-01 While the chemical abundances observed in bright planetary nebulae in local spiral galaxies are less varied than their counterparts in dwarfs, they provide new insight. Their helium abundances are typically enriched by less than 50% compared to the primordial abundance. Nitrogen abundances always show some level of secondary enrichment, but the absolute enrichment is not extreme. In particular, type I PNe are rare among the bright PNe in local spirals. The oxygen and neon abundances are very well correlated and follow the relation between these abundances observed in star-forming galaxies, implying that either the progenitor stars of these PNe modify neither abundance substantially or that they modify both to maintain the ratio (not predicted by theory). According to theory, these results imply that the progenitor stars of bright PNe in local spirals have masses of about 2 M_Sun or less. If so, the progenitors of these PNe have substantial lifetimes that allow us to use them to study the recent... 19. Chemical Abundance Gradients in the Star-forming Ring Galaxies Korchagin, Vladimir; Vorobyov, Eduard; Mayya, Y. D. 1999-09-01 Ring waves of star formation, propagating outward in the galactic disks, leave chemical abundance gradients in their wakes. We show that the relative [Fe/O] abundance gradients in ring galaxies can be used as a tool for determining the role of the SN Ia explosions in their chemical enrichment. We consider two mechanisms--a self-induced wave and a density wave--that can create outwardly propagating star-forming rings in a purely gaseous disk and demonstrate that the radial distribution of the relative [Fe/O] abundance gradients depends neither on the particular mechanism of the wave formation nor on the parameters of the star-forming process. We show that the [Fe/O] profile is determined by the velocity of the wave, the initial mass function, and the initial chemical composition of the star-forming gas. If the role of SN Ia explosions is negligible in the chemical enrichment, the ratio [Fe/O] remains constant throughout the galactic disk with a steep gradient at the wave front. If SN Ia stars are important in the production of cosmic iron, the [Fe/O] ratio has a gradient in the wake of the star-forming wave with the value depending on the frequency of SN Ia explosions. 20. Abundance analysis of HD 22920 spectra Khalack, Viktor 2015-01-01 The new spectropolarimetric observations of HD 22920 with ESPaDOnS at CFHT reveal a strong variability of its spectral line profiles with the phase of stellar rotation. We have obtained Teff = 13640 K, log g = 3.72 for this star from the best fit of its nine Balmer line profiles. The respective model of stellar atmosphere was calculated to perform abundance analysis of HD 22920 using the spectra obtained for three different phases of stellar rotation. We have found that silicon and chromium abundances appear to be vertically stratified in the atmosphere of HD 22920. Meanwhile, silicon shows hints of a possible variability of vertical abundance stratification with rotational phase. 1. Detailed chemical abundances of extragalactic globular clusters using high resolution, integrated light spectra Colucci, Janet E. Globular clusters (GCs) are luminous, observationally accessible objects that are good tracers of the total star formation and evolutionary history of galaxies.
We present the first detailed chemical abundances for GCs in M31 using a new abundance analysis technique designed for high resolution, integrated light (IL) spectra of GCs. This technique has recently been developed using a training set of old GCs in the Milky Way (MW), and makes possible detailed chemical evolution studies of distant galaxies, where high resolution abundance analyses of individual stars are not obtainable. For the 5 M31 GCs presented here, we measure abundances of 14 elements: Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Y, and Ba. We find the M31 GCs have ages (>10 Gyr) and chemical properties similar to MW GCs, including an enhancement in the alpha-elements Ca, Ti and Si of [alpha/Fe] ~ +0.4. In this thesis, we also further develop this IL abundance analysis method to include GCs of ages 10 Myr to 12 Gyr using GCs in the Large Magellanic Cloud (LMC), which contains the necessary sample of clusters over this wide age range. This work demonstrates for the first time that this IL abundance analysis method can be used on clusters of all ages, and that ages can be constrained to within 1-2 Gyr for clusters with ages of ~2 Gyr, and to within a few 100 Myr for younger clusters. Chemical abundances of 22 elements are reported for six LMC clusters. 2. Cosmic rays interactions and the abundances of the chemical elements Our Galaxy is the largest nuclear interaction experiment we know of, because of the interaction between cosmic-ray particles and the interstellar material. Cosmic rays are particles which have been accelerated in the Galaxy or in extragalactic space. Cosmic rays come as protons, electrons, heavier nuclei, and their antiparticles. Up to particle energies of some tens of TeV it is possible to derive chemical abundances of cosmic rays. It has been proposed that cosmic ray particles can be attributed to three main sites of origin and acceleration: a) supernova shocks in the interstellar medium, b) supernova shocks in the stellar wind of the predecessor star, and c) powerful radio galaxies. This proposal leads to quantitative tests, which are encouraging so far. Quantitative models for transport and interaction appear to be consistent with the data. Li, Be, B are secondary in cosmic rays, as are many of the odd-Z elements, as well as the sub-Fe elements. At very low energies, cosmic ray particles are subject to ionization losses, which produce a steep low-energy cutoff; all particles below the cutoff are moved into the thermal material population, and the particles above it remain as cosmic rays. This then changes the chemical abundances in the interstellar medium, and is a dominant process for many isotopes of Li, Be, B. With a quantitative theory for the origin of cosmic rays proposed, it appears worthwhile to search for yet better spallation cross sections, especially near threshold. With such an improved set of cross sections, the theory of the interstellar medium and its chemical abundances, both in thermal and in energetic particles, could be taken a large step forward. 3. Chemical abundances in the old LMC globular cluster Hodge 11 Mateluna, R.; Geisler, D.; Villanova, S.; Carraro, G.; Grocholski, A.; Sarajedini, A.; Cole, A.; Smith, V. 2012-12-01 Context. The study of globular clusters is one of the most powerful ways to learn about a galaxy's chemical evolution and star formation history. They preserve a record of chemical abundances at the time of their formation and are relatively easy to age date.
The most detailed knowledge of the chemistry of a star is given by high resolution spectroscopy, which provides accurate abundances for a wide variety of elements, yielding a wealth of information on the various processes involved in the cluster's chemical evolution. Aims: We studied red giant branch (RGB) stars in an old, metal-poor globular cluster of the Large Magellanic Cloud (LMC), Hodge 11 (H11), in order to measure as many elements as possible. The goal is to compare its chemical trends to those in the Milky Way halo and dwarf spheroidal galaxies in order to help understand the formation history of the LMC and our own Galaxy. Methods: We have obtained high resolution VLT/FLAMES spectra of eight RGB stars in H11. The spectral range allowed us to measure a variety of elements, including Fe, Mg, Ca, Ti, Si, Na, O, Ni, Cr, Sc, Mn, Co, Zn, Ba, La, Eu and Y. Results: We derived a mean [Fe/H] = -2.00 ± 0.04, in the middle of previous determinations. We found low [α/Fe] abundances for our targets, more comparable to values found in dwarf spheroidal galaxies than in the Galactic halo, suggesting that if H11 is representative of its ancient populations then the LMC does not represent a good halo building block. Our [Ca/Fe] value is about 0.3 dex less than that of halo stars used to calibrate the Ca IR triplet technique for deriving metallicity. A hint of a Na abundance spread is observed. Its stars lie at the extreme high O, low Na end of the Na:O anti-correlation displayed by Galactic and LMC globular clusters. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (proposal ID 082.B-0458).Table 4 is only available in electronic form at http://www.aanda.org 4. Chemical abundances of blue straggler stars in Galactic Globular Clusters Lovisi, L 2014-01-01 By using the high resolution spectrograph FLAMES@VLT we performed the first systematic campaign devoted to measure chemical abundances of blue straggler stars (BSSs). These stars, whose existence is not predicted by the canonical stellar evolutionary theory, are likely the product of the interactions between stars in the dense environment of Globular Clusters. Two main scenarios for BSS formation (mass transfer in binary systems and stellar collisions) have been proposed and hydrodynamical simulations predict different chemical patterns in the two cases, in particular C and O depletion for mass transfer BSSs. In this contribution, the main results for BSS samples in 6 Globular Clusters and their interpretation in terms of BSS formation processes are discussed. For the first time, evidence of radiative levitation in the shallow envelopes of BSSs hotter than$\\sim$8000 K has been found. C and O depletion for some BSSs has been detected in 47 Tucanae, M30 and$\\omega$Centauri thus suggesting a mass transfer ori... 5. Ages and chemical abundances in dwarf spheroidal galaxies Smecker-Hane, T A; Smecker-Hane, Tammy; William, Andrew Mc 1999-01-01 The dwarf spheroidal galaxies (dSphs) in the Local Group are excellent systems on which we can test theories of galaxy formation and evolution. Color-magnitude diagrams (CMDs) containing many thousands of stars from the asymptotic giant branch to well below the oldest main-sequence turnoff are being used to infer their star-formation histories, and surprisingly complex evolutionary histories have been deduced. Spectroscopy of individual red giant stars in the dSphs is being used to determine the distribution of chemical abundances in them. 
By combining photometry and spectroscopy, we can overcome the age-metallicity degeneracy inherent in CMDs and determine the evolution of dSphs with unprecedented accuracy. We report on recent progress and discuss a new and exciting avenue of research, high-dispersion spectroscopy that yields abundances for numerous chemical elements. The latter allows us to estimate the enrichment from both Type Ia and Type II supernovae (SNe) and places new limits on how much of the Galaxy ... 6. The chemical composition of red giants in 47 Tucanae. I. Fundamental parameters and chemical abundance patterns Thygesen, A. O.; Sbordone, L.; Andrievsky, S.; Korotin, S.; Yong, D.; Zaggia, S.; Ludwig, H.-G.; Collet, R.; Asplund, M.; Ventura, P.; D'Antona, F.; Meléndez, J.; D'Ercole, A. 2014-12-01 Context. The study of chemical abundance patterns in globular clusters is of key importance to constraining the different candidates for intracluster pollution of light elements. Aims: We aim at deriving accurate abundances for a wide range of elements in the globular cluster 47 Tucanae (NGC 104) to add new constraints to the pollution scenarios for this particular cluster, expanding the range of previously derived element abundances. Methods: Using tailored 1D local thermodynamic equilibrium (LTE) atmospheric models, together with a combination of equivalent width measurements, LTE, and NLTE synthesis, we derive stellar parameters and element abundances from high-resolution, high signal-to-noise spectra of 13 red giant stars near the tip of the RGB. Results: We derive abundances of a total of 27 elements (O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Mo, Ru, Ba, La, Ce, Pr, Nd, Eu, Dy). Departures from LTE were taken into account for Na, Al, and Ba. We find a mean [Fe/H] = -0.78 ± 0.07 and [α/Fe] = 0.34 ± 0.03, in good agreement with previous studies. The remaining elements show good agreement with the literature, but including NLTE for Al has a significant impact on the behavior of this key element. Conclusions: We confirm the presence of an Na-O anti-correlation in 47 Tucanae found by several other works. Our NLTE analysis of Al shifts the [Al/Fe] to lower values, indicating that this may be overestimated in earlier works. No evidence of an intrinsic variation is found in any of the remaining elements. 7. Chemical homogeneity in the Orion Association: Oxygen abundances of B stars Lanz T. 2012-02-01 We present non-LTE oxygen abundances for a sample of B stars in the Orion association. The abundance calculations included non-LTE line formation and used fully blanketed non-LTE model atmospheres. The stellar parameters were the same as adopted in the previous study by Cunha & Lambert (1994). We find that the young Orion stars in this sample of 10 stars are described by a single oxygen abundance with an average value of A(O) = 8.78 and a small dispersion of ±0.05 dex, which is of the order of the uncertainties in the analysis.
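The Orion record above summarises ten B stars by a single mean abundance A(O) = 8.78 with a ±0.05 dex dispersion, where A(X) denotes the usual log10(N_X/N_H) + 12 scale. A minimal sketch of that summary statistic; the per-star values below are invented and only illustrate the calculation, they are not the published measurements.

```python
import statistics

# Hypothetical per-star oxygen abundances A(O) = log10(N_O/N_H) + 12
# for a small sample of B stars (illustrative values only).
a_oxygen = [8.74, 8.81, 8.77, 8.83, 8.72, 8.79, 8.80, 8.76, 8.82, 8.78]

mean_a = statistics.mean(a_oxygen)
dispersion = statistics.pstdev(a_oxygen)   # star-to-star scatter

print(f"A(O) = {mean_a:.2f} +/- {dispersion:.2f} (n = {len(a_oxygen)})")
```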
This average oxygen abundance compares well with the average oxygen abundance obtained previously in Cunha & Lambert (1994): A(O) = 8.72 ± 0.13, although this earlier study, based upon non-blanketed model atmospheres in LTE, displayed larger scatter. Small scatter of chemical abundances in Orion B stars had also been found in our previous studies for neon and argon, all based on the same effective temperature scale. The derived oxygen abundance distribution for the Orion association compares well with other results for the oxygen abundance in the solar neighborhood. 8. Abundance analysis of s-process enhanced barium stars Mahanta, Upakul; Karinkuzhi, Drisya; Goswami, Aruna; Duorah, Kalpana 2016-08-01 Detailed chemical composition studies of stars with enhanced abundances of neutron-capture elements can provide observational constraints for neutron-capture nucleosynthesis studies and clues for understanding their contribution to the Galactic chemical enrichment. We present abundance results from high-resolution spectral analyses of a sample of four chemically peculiar stars characterized by s-process enhancement. High-resolution spectra (R ~ 42000) of these objects spanning a wavelength range from 4000 to 6800 Å are taken from the ELODIE archive. We have estimated the stellar atmospheric parameters, the effective temperature Teff, the surface gravity log g, and metallicity [Fe/H] from local thermodynamic equilibrium analysis using model atmospheres. We report estimates of elemental abundances for several neutron-capture elements, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu and Dy. While HD 49641 and HD 58368 show [Ba/Fe] ≥ 1.16, the other two objects HD 119650 and HD 191010 are found to be mild barium stars with [Ba/Fe] ~ 0.4. The derived abundances of the elements are interpreted on the basis of existing theories for understanding their origin and evolution. 9. Chemical Security Analysis Center Federal Laboratory Consortium — In 2006, by Presidential Directive, DHS established the Chemical Security Analysis Center (CSAC) to identify and assess chemical threats and vulnerabilities in the... 10. Automatic abundance analysis of high resolution spectra Bonifacio, Piercarlo; Caffau, Elisabetta 2003-01-01 We describe an automatic procedure for determining abundances from high resolution spectra. Such procedures are becoming increasingly important as large amounts of data are delivered from 8m telescopes and their high-multiplexing fiber facilities, such as FLAMES on ESO-VLT. The present procedure is specifically targeted for the analysis of spectra of giants in the Sgr dSph; however, the procedure may be, in principle, tailored to analyse stars of any type. Emphasis is placed on the algorithms and on the stability of the method; the external accuracy rests, ultimately, on the reliability of the theoretical models (model-atmospheres, synthetic spectra) used to interpret the data. Comparison of the results of the procedure with the results of a traditional analysis for 12 Sgr giants shows that abundances accurate at the level of 0.2 dex, comparable with that of traditional analysis of the same spectra, may be derived in a fast and efficient way. Such automatic procedures are not meant to replace the traditional ...
12. Chemical Abundances of the Outer Halo Stars in the Milky Way Ishigaki, M; Aoki, W 2009-01-01 We present chemical abundances of 57 metal-poor stars that are likely constituents of the outer stellar halo in the Milky Way. Almost all of the sample stars have an orbit reaching a maximum vertical distance (Z_max) of >5 kpc above and below the Galactic plane. High-resolution, high signal-to-noise spectra for the sample stars obtained with Subaru/HDS are used to derive chemical abundances of Na, Mg, Ca, Ti, Cr, Mn, Fe, Ni, Zn, Y and Ba with an LTE abundance analysis code. The resulting abundance data are combined with those presented in the literature, which mostly targeted stars with smaller Z_max, and both data sets are used to investigate any systematic trends in detailed abundance patterns depending on their kinematics. It is shown that the [alpha/Fe] ratios of the stars with Z_max > 5 kpc are systematically lower (~0.1 dex) than those of stars with smaller Z_max. This result of the lower [alpha/Fe] for the assumed outer halo stars is consistent with previous studies that found a signature of lower [alpha/Fe] ratios for stars with extreme ki... 13. Chemical Abundances in a Sample of Red Giants in the Open Cluster NGC 2420 from APOGEE Souto, Diogo; Smith, Verne; Prieto, Carlos Allende; Pinsonneault, Marc; Zamora, Olga; García-Hernández, D Anibal; Mészáros, Szabolcs; Bovy, Jo; Pérez, Ana Elia García; Anders, Friedrich; Bizyaev, Dmitry; Carrera, Ricardo; Frinchaboy, Peter; Holtzman, Jon; Ivans, Inese; Majewski, Steve; Shetrone, Matthew; Sobeck, Jennifer; Pan, Kaike; Tang, Baitian; Villanova, Sandro; Geisler, Douglas 2016-01-01 NGC 2420 is a ~2 Gyr-old well-populated open cluster that lies about 2 kpc beyond the solar circle, in the general direction of the Galactic anti-center. Most previous abundance studies have found this cluster to be mildly metal-poor, but with a large scatter in the obtained metallicities. Detailed chemical abundance distributions are derived for 12 red-giant members of NGC 2420 via a manual abundance analysis of high-resolution (R = 22,500) near-infrared (λ = 1.5-1.7 μm) spectra obtained from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey. The sample analyzed contains 6 stars that are identified as members of the first-ascent red giant branch (RGB), as well as 6 members of the red clump (RC).
We find small scatter in the star-to-star abundances in NGC 2420, with a mean cluster abundance of [Fe/H] = -0.16$\\pm$0.04 for the 12 red giants. The internal abundance dispersion for all elements (C, N, O, Na, Mg, Al, Si, K, Ca, Ti, V, Cr, Mn, Co and Ni... 14. Multidimensional Chemical Modeling. III. Abundance and excitation of diatomic hydrides Bruderer, Simon; Stäuber, P; Doty, Steven D 2010-01-01 The Herschel Space Observatory opens the sky for observations in the far infrared at high spectral and spatial resolution. A particular class of molecules will be directly observable; light diatomic hydrides and their ions (CH, OH, SH, NH, CH+, OH+, SH+, NH+). These simple constituents are important both for the chemical evolution of the region and as tracers of high-energy radiation. If outflows of a forming star erode cavities in the envelope, protostellar far UV (FUV; 6 100 K) for water ice to evaporate. If the cavity shape allows FUV radiation to penetrate this hot-core region, the abundance of FUV destroyed species (e.g. water) is decreased. In particular, diatomic hydrides and their ions CH$+, OH+ and NH+ are enhanced by many orders of magnitude in the outflow walls due to the combination of high gas temperatures and rapid photodissociation of more saturated species. The enhancement of these diatomic hydrides is sufficient for a detection using the HIFI and PACS instruments onboard Herschel. The effect... 15. Chemical tagging can work: Identification of stellar phase-space structures purely by chemical-abundance similarity Hogg, David W; Ness, Melissa; Rix, Hans-Walter; Foreman-Mackey, Daniel 2016-01-01 Chemical tagging promises to use detailed abundance measurements to identify spatially separated stars that were in fact born together (in the same molecular cloud), long ago. This idea has not previously yielded scientific successes, probably because of the noise and incompleteness in chemical-abundance measurements. However, we have succeeded in substantially improving spectroscopic measurements with The Cannon, which has delivered 15 individual abundances for 100,000 stars observed as part of the APOGEE spectroscopic survey, with precisions around 0.04 dex. We test the chemical-tagging hypothesis by looking at clusters in abundance space and confirming that they are clustered in phase space. We identify (by the k-means algorithm) overdensities of stars in the 15-dimensional chemical-abundance space delivered by The Cannon, and plot the associated stars in phase space. We use only abundance-space information (no positional information) to identify stellar groups. We find that clusters in abundance space are... 16. Chemical abundances of 1111 FGK stars from the HARPS GTO planet search program Adibekyan, V Zh; Santos, N C; Mena, E Delgado; Hernandez, J I Gonzalez; Israelian, G; Mayor, M; Khachatryan, G 2012-01-01 We performed a uniform and detailed abundance analysis of 12 refractory elements (Na, Mg, Al, Si, Ca, Ti, Cr, Ni, Co, Sc, Mn and V) for a sample of 1111 FGK dwarf stars from the HARPS GTO planet search program. 109 of these stars are known to harbour giant planetary companions and 26 stars are hosting exclusively Neptunians and super-Earths. The main goals of this paper are i) to investigate whether there are any differences between the elemental abundance trends for stars of different stellar populations; ii) to characterise the planet host and non-host samples in term of their [X/H]. 
The extensive study of this sample, focused on the abundance differences between stars with and without planets will be presented in a parallel paper. The equivalent widths of spectral lines are automatically measured from HARPS spectra with the ARES code. The abundances of the chemical elements are determined using a LTE abundance analysis relative to the Sun, with the 2010 revised version of the spectral synthesis code MOOG a... 17. The chemical composition of red giants in 47 Tucanae I: Fundamental parameters and chemical abundance patterns Thygesen, A O; Andrievsky, S; Korotin, S; Yong, D; Zaggia, S; Ludwig, H -G; Collet, R; Asplund, M; D'Antona, F; Meléndez, J; D'Ercole, A 2014-01-01 Context: The study of chemical abundance patterns in globular clusters is of key importance to constrain the different candidates for intra-cluster pollution of light elements. Aims: We aim at deriving accurate abundances for a large range of elements in the globular cluster 47 Tucanae (NGC 104) to add new constraints to the pollution scenarios for this particular cluster, expanding the range of previously derived element abundances. Methods: Using tailored 1D LTE atmospheric models together with a combination of equivalent width measurements, LTE, and NLTE synthesis we derive stellar parameters and element abundances from high-resolution, high signal-to-noise spectra of 13 red giant stars near the tip of the RGB. Results: We derive abundances of a total 27 elements (O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Mo, Ru, Ba, La, Ce, Pr, Nd, Eu, Dy). Departures from LTE were taken into account for Na, Al and Ba. We find a mean [Fe/H] = $-0.78\\pm0.07$ and $[\\alpha/{\\rm Fe}]=0.34\\pm0.03$ in... 18. Chemical abundances in a high velocity RR Lyrae star near the bulge Hansen, Camilla Juul; Koch, Andreas; Xu, Siyi; Kunder, Andrea; Ludwig, Hans-Guenter 2016-01-01 Low-mass, variable, high-velocity stars are interesting study cases for many aspects of Galactic structure and evolution. Until recently, the only known high- or hyper-velocity stars were young stars thought to originate from the Galactic centre. Wide-area surveys like APOGEE and BRAVA have found several low-mass stars in the bulge with Galactic rest-frame velocities larger than 350 km/s. In this study we present the first abundance analysis of a low-mass, RR Lyrae star, located close to the Galactic bulge, with a space motion of ~ -400 km/s. Using medium-resolution spectra, we derive abundances (including upper limits) of 11 elements. These allow us to chemically tag the star and discuss its origin, although our derived abundances and metallicity, at [Fe/H] =-0.9 dex, do not point toward one unambiguous answer. Based on the chemical tagging, we cannot exclude that it originated in the bulge. However, combining its retrograde orbit and the derived abundances suggests that the star was accelerated from the out... 19. A Chemical Abundance Study of 10 Open Clusters Based on WIYN-Hydra Spectroscopy Jacobson, H R; Friel, E D 2011-01-01 We present a detailed chemical abundance study of evolved stars in 10 open clusters based on Hydra multi-object echelle spectra obtained with the WIYN 3.5m telescope. From an analysis of both equivalent widths and spectrum synthesis, abundances have been determined for the elements Fe, Na, O, Mg, Si, Ca, Ti, Ni, Zr, and for two of the 10 clusters, Al and Cr. To our knowledge, this is the first detailed abundance analysis for clusters NGC 1245, NGC 2194, NGC 2355 and NGC 2425. 
These 10 clusters were selected for analysis because they span a Galactocentric distance range Rgc~9-13 kpc, the approximate location of the transition between the inner and outer disk. Combined with cluster samples from our previous work and those of other studies in the literature, we explore abundance trends as a function of cluster Rgc, age, and [Fe/H]. The [Fe/H] distribution appears to decrease with increasing Rgc to a distance of ~12 kpc, and then flattens to a roughly constant value in the outer disk. Cluster average element [X/F... 20. Chemical Abundances in our Galaxy and Other Galaxies Derived from H II Regions Peimbert, M.; L. Carigi; Peimbert, A. 2000-01-01 We discuss the accuracy of the abundance determinations of H II regions in our Galaxy and other galaxies. We focus on the main observational constraints derived from abundance determinations that have implications for models of galactic chemical evolution: a) the helium to hydrogen abundance ratio, He/H; b) the oxygen to hydrogen abundance ratio, O/H; c) the carbon to oxygen abundance ratio, C/O; d) the helium to oxygen and helium to heavy elements abundance ratios, Delta Y/ Delta O and Delta... 1. Determining stellar atmospheric parameters and chemical abundances of FGK stars with iSpec Blanco-Cuaresma, S; Heiter, U; Jofré, P 2014-01-01 Context. An increasing number of high-resolution stellar spectra is available today thanks to many past and ongoing extensive spectroscopic surveys. Consequently, the scientific community needs automatic procedures to derive atmospheric parameters and individual element abundances. Aims. Based on the widely known SPECTRUM code by R. O. Gray, we developed an integrated spectroscopic software framework suitable for the determination of atmospheric parameters (i.e., effective temperature, surface gravity, metallicity) and individual chemical abundances. The code, named iSpec and freely distributed, is written mainly in Python and can be used on different platforms. Methods. iSpec can derive atmospheric parameters by using the synthetic spectral fitting technique and the equivalent width method. We validated the performance of both approaches by developing two different pipelines and analyzing the Gaia FGK benchmark stars spectral library. The analysis was complemented with several tests designed to assess other ... 2. Probing the chemical abundances in distant galaxies with 10 m class telescopes Contini, T. 2003-01-01 The determination of chemical abundances in star-forming galaxies and the study of their evolution on cosmological timescales are powerful tools for understanding galaxy formation and evolution. This contribution presents the latest results in this domain. We show that detailed studies of chemical abundances in UV-selected, HII and starburst nucleus galaxies, together with the development of new chemical evolution models, put strong constraints on the evolutionary stage of these objects in te... 3. Chemical exchange program analysis. Waffelaert, Pascale 2007-09-01 As part of its EMS, Sandia performs an annual environmental aspects/impacts analysis. The purpose of this analysis is to identify the environmental aspects associated with Sandia's activities, products, and services and the potential environmental impacts associated with those aspects. Division and environmental programs established objectives and targets based on the environmental aspects associated with their operations. In 2007 the most significant aspect identified was Hazardous Materials (Use and Storage). 
The objective for Hazardous Materials (Use and Storage) was to improve chemical handling, storage, and on-site movement of hazardous materials. One of the targets supporting this objective was to develop an effective chemical exchange program, making a business case for it in FY07, and fully implementing a comprehensive chemical exchange program in FY08. A Chemical Exchange Program (CEP) team was formed to implement this target. The team consists of representatives from the Chemical Information System (CIS), Pollution Prevention (P2), the HWMF, Procurement and the Environmental Management System (EMS). The CEP Team performed benchmarking and conducted a life-cycle analysis of the current management of chemicals at SNL/NM and compared it to Chemical Exchange alternatives. Those alternatives are as follows: (1) Revive the 'Virtual' Chemical Exchange Program; (2) Re-implement a 'Physical' Chemical Exchange Program using a Chemical Information System; and (3) Transition to a Chemical Management Services System. The analysis and benchmarking study shows that the present management of chemicals at SNL/NM is significantly disjointed and a life-cycle or 'Cradle-to-Grave' approach to chemical management is needed. This approach must consider the purchasing and maintenance costs as well as the cost of ultimate disposal of the chemicals and materials. A chemical exchange is needed as a mechanism to re-apply chemicals on site. This 4. Chemical Abundance Patterns and the Early Environment of Dwarf Galaxies Corlies, Lauren; Tumlinson, Jason; Bryan, Greg 2013-01-01 Recent observations suggest that abundance pattern differences exist between low metallicity stars in the Milky Way stellar halo and those in the dwarf satellite galaxies. This paper takes a first look at what role the early environment for pre-galactic star formation might have played in shaping these stellar populations. In particular, we consider whether differences in cross-pollution between the progenitors of the stellar halo and the satellites could help to explain the differences in abundance patterns. Using an N-body simulation, we find that the progenitor halos of the main halo are primarily clustered together at z=10 while the progenitors of the satellite galaxies remain on the outskirts of this cluster. Next, analytically modeled supernova-driven winds show that main halo progenitors cross-pollute each other more effectively while satellite galaxy progenitors remain more isolated. Thus, inhomogeneous cross-pollution as a result of different high-z spatial locations of each system's progenitors can ... 5. Chemical abundances of giant stars in the Crater stellar system Bonifacio, P; Zaggia, S; François, P; Sbordone, L; Andrievsky, S M; Korotin, S A 2015-01-01 We obtained spectra for two giants of Crater (Crater J113613-105227 and Crater J113615-105244) using X-Shooter at the VLT. The spectra have been analysed with the MyGIsFoS code using a grid of synthetic spectra computed from one dimensional, Local Thermodynamic Equilibrium (LTE) model atmospheres. Effective temperature and surface gravity have been derived from photometry measured from images obtained by the Dark Energy Survey. The radial velocities are 144.3+-4.0 km/s for Crater J113613-105227 and and 134.1+-4.0 km/s for Crater J113615-105244. The metallicities are [Fe/H]=-1.73 and [Fe/H]=-1.67, respectively. Beside the iron abundance we could determine abundances for nine elements: Na, Mg, Ca, Ti, V, Cr, Mn, Ni and Ba. 
For Na and Ba we took into account deviations from LTE, since the corrections are significant. The abundance ratios are similar in the two stars and resemble those of Galactic stars of the same metallicity. On the deep photometric images we could detect several stars that lie to the blue of t... 6. Young stars and ionized nebulae in M83: comparing chemical abundances at high metallicity Bresolin, Fabio; Urbaneja, Miguel A; Gieren, Wolfgang; Ho, I-Ting; Pietrzynski, Grzegorz 2016-01-01 We present spectra of 14 A-type supergiants in the metal-rich spiral galaxy M83. We derive stellar parameters and metallicities, and measure a spectroscopic distance modulus m-M = 28.47 +\\- 0.10 (4.9 +\\- 0.2 Mpc), in agreement with other methods. We use the stellar characteristic metallicity of M83 and other systems to discuss a version of the galaxy mass-metallicity relation that is independent of the analysis of nebular emission lines and the associated systematic uncertainties. We reproduce the radial metallicity gradient of M83, which flattens at large radii, with a chemical evolution model, constraining gas inflow and outflow processes. We carry out a comparative analysis of the metallicities we derive from the stellar spectra and published HII region line fluxes, utilizing both the direct, Te-based method and different strong-line abundance diagnostics. The direct abundances are in relatively good agreement with the stellar metallicities, once we apply a modest correction to the nebular oxygen abundance... 7. Chemical abundances in a high-velocity RR Lyrae star near the bulge Hansen, C. J.; Rich, R. M.; Koch, A.; Xu, S.; Kunder, A.; Ludwig, H.-G. 2016-05-01 Low-mass variable high-velocity stars are interesting study cases for many aspects of Galactic structure and evolution. Until recently, the only known high- or hyper-velocity stars were young stars thought to originate from the Galactic center. Wide-area surveys such as APOGEE and BRAVA have found several low-mass stars in the bulge with Galactic rest-frame velocities higher than 350 km s-1. In this study we present the first abundance analysis of a low-mass RR Lyrae star that is located close to the Galactic bulge, with a space motion of ~-400 km s-1. Using medium-resolution spectra, we derived abundances (including upper limits) of 11 elements. These allowed us to chemically tag the star and discuss its origin, although our derived abundances and metallicity, at [Fe/H] =-0.9 dex, do not point toward one unambiguous answer. Based on the chemical tagging, we cannot exclude that it originated in the bulge. However, its retrograde orbit and the derived abundances combined suggest that the star was accelerated from the outskirts of the inner (or even outer) halo during many-body interactions. Other possible origins include the bulge itself, or the star might have been stripped from a stellar cluster or the Sagittarius dwarf galaxy when it merged with the Milky Way. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. 8. An accurate and self-consistent chemical abundance catalogue for the APOGEE/Kepler sample Hawkins, Keith; Jofre, Paula; Gilmore, Gerry; Elsworth, Yvonne; Hekker, Saskia 2016-01-01 Context. 
The APOGEE survey has obtained high-resolution infrared spectra of more than 100,000 stars. Deriving chemical abundance patterns of these stars is paramount to piecing together the structure of the Milky Way. While the derived chemical abundances have been shown to be precise for most stars, some calibration problems have been reported, in particular for more metal-poor stars. Aims. In this paper, we aim to (1) re-determine the chemical abundances of the APOGEE+Kepler stellar sample (APOKASC) with an independent procedure, line list and line selection, and high-quality surface gravity information from asteroseismology, and (2) extend the abundance catalogue by including abundances that are not currently reported in the most recent APOGEE release (DR12). Methods. We fixed the Teff and log g to those determined using spectrophotometric and asteroseismic techniques, respectively. We made use of the Brussels Automatic Stellar Parameter (BACCHUS) code to derive the metallicity and broadening parameters f... 9. A CHEMICAL ABUNDANCE STUDY OF 10 OPEN CLUSTERS BASED ON WIYN-HYDRA SPECTROSCOPY We present a detailed chemical abundance study of evolved stars in 10 open clusters based on Hydra multi-object echelle spectra obtained with the WIYN 3.5 m telescope. From an analysis of both equivalent widths and spectrum synthesis, abundances have been determined for the elements Fe, Na, O, Mg, Si, Ca, Ti, Ni, Zr, and for two of the 10 clusters, Al and Cr. To our knowledge, this is the first detailed abundance analysis for clusters NGC 1245, NGC 2194, NGC 2355, and NGC 2425. These 10 clusters were selected for analysis because they span a Galactocentric distance range Rgc ∼ 9-13 kpc, the approximate location of the transition between the inner and outer disks. Combined with cluster samples from our previous work and those of other studies in the literature, we explore abundance trends as a function of cluster Rgc, age, and [Fe/H]. As found previously by us and other studies, the [Fe/H] distribution appears to decrease with increasing Rgc to a distance of ∼12 kpc and then flattens to a roughly constant value in the outer disk. Cluster average element [X/Fe] ratios appear to be independent of Rgc, although the picture for [O/Fe] is more complicated, with a clear trend of [O/Fe] with [Fe/H] and sample incompleteness. Other than oxygen, no other element [X/Fe] exhibits a clear trend with [Fe/H]; likewise, there does not appear to be any strong correlation between abundance and cluster age. We divided clusters into different age bins to explore temporal variations in the radial element distributions. The radial metallicity gradient appears to have flattened slightly as a function of time, as found by other studies. There is also some indication that the transition from the inner disk metallicity gradient to the ∼constant [Fe/H] distribution of the outer disk occurs at different Galactocentric radii for different age bins. However, interpretation of the time evolution of radial abundance distributions is complicated by the unequal Rgc and [Fe/H] ranges spanned by ... 10. Effect of chemical fertilization and green manure on the abundance and community structure of ammonia oxidizers in a paddy soil Yu Fang 2015-12-01 Ammonia oxidation is a critical step in the soil N cycle and can be affected by the fertilization regime. Chinese milk-vetch (Astragalus sinicus L.; MV) is a major green manure of rice (Oryza sativa L.)
fields in southern China, which is recommended as an important agronomic practice to improve soil fertility. Soil chemical properties and the abundance and community structures of ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA) in a MV-rice rotation field under different fertilization regimes were investigated. The field experiment included six treatments: control, without MV or chemical fertilizer (CK); 100% chemical fertilizer (NPK); 18 000 kg MV ha-1 plus 100% chemical fertilizer (NPKM1); 18 000 kg MV ha-1 plus 40% chemical fertilizer (NPKM2); 18 000 kg MV ha-1 alone (MV); and 18 000 kg MV ha-1 plus 40% chemical fertilizer plus straw (NPKMS). Results showed that the NPKMS treatment could greatly improve soil fertility even though only 60% of the chemical fertilizer was applied. Only in the MV treatment did the abundance of AOB differ significantly from the control; AOA were more abundant than AOB in all corresponding treatments. The NPKMS treatment had the highest AOA abundance (1.19 × 10^8 amoA gene copies g-1), and the lowest abundance was recorded in the CK treatment (3.21 × 10^7 amoA gene copies g-1). The abundance of AOA was significantly positively related to total N, available N, NH4+-N, and NO3--N. The community structure of AOA exhibited little variation among the different fertilization regimes, whereas the community structure of AOB was highly responsive. Phylogenetic analysis showed that all AOB sequences were affiliated with Nitrosospira or Nitrosomonas, and all AOA denaturing gradient gel electrophoresis (DGGE) bands belonged to the soil and sediment lineage. These findings could be fundamental to improving our understanding of AOB and AOA in the N cycle in paddy soil. 11. The determination and interpretation of chemical abundances from HII region spectra in galaxies An overview is given of the determination of element abundances from HII region emission lines in external galaxies. The variation of abundances - particularly O, S and N - with type of, and position in, a galaxy is discussed. Some aspects of chemical evolution which may have led to these variations are investigated, introducing a "throughflow" model to show some effects of gas flow. (author) 12. Symmetric vs. asymmetric planetary nebulae: morphology and chemical abundances Maciel, W J 2010-01-01 We analyse a large sample of Galactic planetary nebulae based on their chemical composition and morphology. A recent morphological classification system is adopted, and several elements are considered, namely He, N, O, S, Ar, Ne, and C, in order to investigate the correlations involving these elements and the different PN types. Special emphasis is given to the differences between symmetric (round or elliptical) nebulae and those that present some degree of asymmetry (bipolars or bipolar core objects). The results are compared with previous findings both for PNe in the Galaxy and in the Magellanic Clouds. 13. Chemical process hazards analysis NONE 1996-02-01 The Office of Worker Health and Safety (EH-5) under the Assistant Secretary for the Environment, Safety and Health of the US Department of Energy (DOE) has published two handbooks for use by DOE contractors managing facilities and processes covered by the Occupational Safety and Health Administration (OSHA) Rule for Process Safety Management of Highly Hazardous Chemicals (29 CFR 1910.119), herein referred to as the PSM Rule.
The PSM Rule contains an integrated set of chemical process safety management elements designed to prevent chemical releases that can lead to catastrophic fires, explosions, or toxic exposures. The purpose of the two handbooks, Process Safety Management for Highly Hazardous Chemicals and Chemical Process Hazards Analysis, is to facilitate implementation of the provisions of the PSM Rule within the DOE. The purpose of this handbook, Chemical Process Hazards Analysis, is to facilitate, within the DOE, the performance of chemical process hazards analyses (PrHAs) as required under the PSM Rule. It provides basic information for the performance of PrHAs, and should not be considered a complete resource on PrHA methods. Likewise, to determine if a facility is covered by the PSM Rule, the reader should refer to the handbook Process Safety Management for Highly Hazardous Chemicals (DOE-HDBK-1101-96). Promulgation of the PSM Rule has heightened the awareness of chemical safety management issues within the DOE. This handbook is intended for use by DOE facilities and processes covered by the PSM Rule to facilitate contractor implementation of the PrHA element of the PSM Rule. However, contractors whose facilities and processes are not covered by the PSM Rule may also use this handbook as a basis for conducting process hazards analyses as part of their good management practices. This handbook explains the minimum requirements for PrHAs outlined in the PSM Rule. Nowhere have requirements been added beyond what is specifically required by the rule. 14. An abundance analysis for Vega: Is it a λ Boo star? Ilijic, S; Dominis, D; Planinic, M; Pavlovski, K 1998-01-01 Since Baschek & Slettebak (1988) drew attention to the similarity between the abundance pattern of λ Boo stars and that of Vega, there has been a long debate whether Vega should be listed among the chemically peculiar stars of the λ Boo type. We performed an elemental abundance analysis using a high-dispersion spectrum in the optical region, and confirmed its mild metal underabundance. In our discussion we reinforce the suggestion that Vega is a mild λ Boo star. 15. Chemical Abundances of Luminous Cool Stars in the Galactic Center from High-Resolution Infrared Spectroscopy Cunha, Katia; Smith, Verne V; Ramirez, Solange V; Blum, Robert D; Terndrup, Donald M 2007-01-01 We present chemical abundances in a sample of luminous cool stars located within 30 pc of the Galactic Center. Abundances of carbon, nitrogen, oxygen, calcium, and iron were derived from high-resolution infrared spectra in the H- and K-bands. The abundance results indicate that both [O/Fe] and [Ca/Fe] are enhanced, by averages of +0.2 and +0.3 dex respectively, relative to either the Sun or the Milky Way disk at near-solar Fe abundances. The Galactic Center stars show a nearly uniform and nearly solar iron abundance. The mean value of A(Fe) = 7.59 ± 0.06 agrees well with previous work. The total range in Fe abundance among Galactic Center stars, 0.16 dex, is significantly narrower than the iron abundance distributions found in the literature for the older bulge population. Our snapshot of the current-day Fe abundance within 30 pc of the Galactic Center samples stars with an age less than 1 Gyr; a larger sample in time (or space) may find a wider spread in abundances.
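The Galactic Center entry above quotes a mean A(Fe), a ±0.06 uncertainty, and a 0.16 dex star-to-star range; numbers of that kind are simple sample statistics over the per-star measurements. A small sketch of that bookkeeping, using invented A(Fe) values (not the paper's data) and an assumed solar A(Fe) of 7.50:

```python
import numpy as np

# Invented A(Fe) = log10(N_Fe/N_H) + 12 values for a handful of stars;
# illustrative only, not the measurements from the study summarized above.
a_fe = np.array([7.55, 7.62, 7.58, 7.66, 7.53, 7.60])

mean = a_fe.mean()
scatter = a_fe.std(ddof=1)           # star-to-star dispersion
full_range = a_fe.max() - a_fe.min()
fe_h = a_fe - 7.50                   # [Fe/H], assuming A(Fe)_sun = 7.50

print(f"mean A(Fe) = {mean:.2f} +/- {scatter:.2f}, full range = {full_range:.2f} dex")
print(f"mean [Fe/H] = {fe_h.mean():.2f}")
```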
16. Eliminating Error in the Chemical Abundance Scale for Extragalactic HII Regions Lopez-Sanchez, Angel R; Kewley, L J; Zahid, H J; Nicholls, D C; Scharwachter, J 2012-01-01 In an attempt to remove the systematic errors which have plagued the calibration of the HII region abundance sequence, we have theoretically modeled the extragalactic HII region sequence. We then used the theoretical spectra so generated in a double-blind experiment to recover the chemical abundances using both the classical electron temperature + ionization correction factor technique, and the technique which depends on the use of strong emission lines (SELs) in the nebular spectrum to estimate the abundance of oxygen. We find a number of systematic trends, and we provide correction formulae which should remove systematic errors in the electron temperature + ionization correction factor technique. We also provide a critical evaluation of the various semi-empirical SEL techniques. Finally, we offer a scheme which should help to eliminate systematic errors in the SEL-derived chemical abundance scale for extragalactic HII regions. 17. Statistical analysis from recent abundance determinations in HgMn stars Ghazaryan, S.; Alecian, G. 2016-08-01 To better understand the hot chemically peculiar group of HgMn stars, we have considered a compilation of a large number of recently published data obtained for these stars from spectroscopy. We compare these data to the previous compilation by Smith. We confirm the main trends of the abundance peculiarities, namely the increasing overabundances with increasing atomic number of the heavy elements, and their large spread from star to star. For all the measured elements, we have looked for correlations between abundances and effective temperature (Teff). In addition to the known correlation for Mn, some other elements are found to show some connection between their abundances and Teff. We have also checked whether multiplicity is a determinant parameter for the abundance peculiarities determined for these stars. A statistical analysis using a Kolmogorov-Smirnov test shows that the abundance anomalies in the atmospheres of HgMn stars do not show a significant dependence on multiplicity. 18. Chemical Abundances of the Secondary Star in the Black Hole X-Ray Binary V404 Cygni González Hernández, Jonay I; Rebolo, Rafael; Israelian, Garik; Filippenko, Alexei V; Chornock, Ryan 2011-01-01 We present a chemical abundance analysis of the secondary star in the black hole binary V404 Cygni, using Keck I/HIRES spectra. We adopt a χ²-minimization procedure to derive the stellar parameters, taking into account any possible veiling from the accretion disk. With these parameters we determine the atmospheric abundances of O, Na, Mg, Al, Si, Ca, Ti, Fe, and Ni. The abundances of Al, Si, and Ti appear to be slightly enhanced when compared with average values in thin-disk solar-type stars. The O abundance, derived from optical lines, is particularly enhanced in the atmosphere of the secondary star in V404 Cygni. This, together with the peculiar velocity of this system as compared with the Galactic velocity dispersion of thin-disk stars, suggests that the black hole formed in a supernova or hypernova explosion. We explore different supernova/hypernova models having various geometries to study possible contamination of nucleosynthetic products in the chemical abundance pattern of the secondary star. W...
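The V404 Cygni entry above derives stellar parameters by χ²-minimization while allowing for veiling from the accretion disk. The toy sketch below only illustrates the general idea: the Gaussian "synthetic" line, the noise level, and the veiling convention F_obs = (F_star + r)/(1 + r) are assumptions made for the example, not the authors' actual pipeline (which fits real synthetic spectra over Teff, log g and [Fe/H]).

```python
import numpy as np
from scipy.optimize import minimize

wave = np.linspace(6540.0, 6580.0, 200)

def synthetic(depth, width, center=6562.8):
    """Stand-in for a synthetic normalized spectrum: one Gaussian absorption line."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / width) ** 2)

# Fake "observation": a veiled version of the line plus Gaussian noise.
rng = np.random.default_rng(1)
obs = (synthetic(0.6, 4.0) + 0.5) / 1.5 + rng.normal(0.0, 0.01, wave.size)
err = np.full_like(obs, 0.01)

def chi2(params):
    depth, width, veil = params
    # One common veiling convention for normalized spectra (an assumption here).
    model = (synthetic(depth, width) + veil) / (1.0 + veil)
    return np.sum(((obs - model) / err) ** 2)

best = minimize(chi2, x0=[0.5, 3.0, 0.2], method="Nelder-Mead")
print("best-fit (depth, width, veiling):", np.round(best.x, 2))
```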
19. Abundance analysis of the outer halo globular cluster Palomar 14 Caliskan, S; Grebel, K E 2011-01-01 We determine the elemental abundances of nine red giant stars belonging to Palomar 14 (Pal 14). Pal 14 is an outer halo globular cluster (GC) at a distance of ~70 kpc. Our abundance analysis is based on high-resolution spectra and one-dimensional stellar model atmospheres. We derived the abundances for the iron-peak elements Sc, V, Cr, Mn, Co, Ni, the α-elements O, Mg, Si, Ca, Ti, the light odd element Na, and the neutron-capture elements Y, Zr, Ba, La, Ce, Nd, Eu, Dy, and Cu. Our data do not permit us to investigate light-element (i.e., O to Mg) abundance variations. The neutron-capture elements show an r-process signature. We compare our measurements with the abundance ratios of inner and other outer halo GCs, halo field stars, GCs of recognized extragalactic origin, and stars in dwarf spheroidal galaxies (dSphs). The abundance pattern of Pal 14 is almost identical to those of Pal 3 and Pal 4, the next distant members of the outer halo GC population after Pal 14. The abundance pattern of Pal 14 is... 20. How Does Abundant Display Space Support Data Analysis? Knudsen, Søren This thesis explores information visualizations on large, high-resolution touch displays for analysis of massive amounts of data. The ever increasing rate at which data is collected about everything from people's health, over organisations' expenditures, to scientific experiments, necessitates new ... data analysis techniques. Information visualizations on large, high-resolution touch displays are a promising answer to these needs, and provide abundant display space for people to make sense of data. However, little is known about how to tailor interactive visualizations to abundant display space or... The radiometric method of analysis is noted for its sensitivity and its simplicity in both apparatus and procedure. A few inexpensive radioactive reagents permit the analysis of a wide variety of chemical elements and compounds. Any particular procedure is generally applicable over a very wide range of concentrations. It is potentially an analytical method of great industrial significance. Specific examples of analyses are cited to illustrate the potentialities of ordinary equipment. Apparatus specifically designed for radiometric chemistry may shorten the time required, and increase the precision and accuracy for routine analyses. A sensitive and convenient apparatus for the routine performance of radiometric chemical analysis is a special type of centrifuge which has been used in obtaining the data presented in this paper. The radioactivity of the solution is measured while the centrifuge is spinning. This device has been used as the basis for an automatic analyser for phosphate ion, programmed to follow a sequence of unknown sampling, reagent mixing, centrifugation, counting, data presentation, and phosphate replenishment. This analyser can repeatedly measure phosphate concentration in the range of 5 to 50 ppm with an accuracy of ±5%. (author) 2. Chemical Abundances of the Milky Way Thick Disk and Stellar Halo I.: Implications of [alpha/Fe] for Star Formation Histories in Their Progenitors Ishigaki, M N; Aoki, W 2012-01-01 We present the abundance analysis of 97 nearby metal-poor (-3.3-2$.
These results favor the scenarios that the MW thick disk formed through rapid chemical enrichment primarily through Type II supernovae of massive stars, while the stellar halo formed at least in part via accretion of progenitor stellar systems that had been chemically enriched on different timescales. 3. Chemical abundances for A- and F-type supergiant stars Molina, R E 2016-01-01 We present the stellar parameters and elemental abundances of a set of A-F-type supergiant stars, HD 45674, HD 180028, HD 194951 and HD 224893, using high-resolution (R ~ 42,000) spectra taken from the ELODIE library. We present the first results of the abundance analysis for HD 45674 and HD 224893. We reaffirm the abundances for HD 180028 and HD 194951 studied previously by Luck (2014). The α-element abundances indicate that the objects belong to the thin-disc population. Their abundances and locations on the Hertzsprung-Russell diagram seem to point out that HD 45674, HD 194951 and HD 224893 are in the post-first dredge-up (post-1DUP) phase and are moving in the red-blue loop region. HD 180028, on the contrary, shows abundances typical of population I, but its evolutionary status could not be satisfactorily defined. 4. High-resolution abundance analysis of HD 140283 Siqueira-Mello, C; Barbuy, B; Spite, M; Spite, F; Korotin, S A 2015-01-01 HD 140283 is a reference subgiant that is metal poor and confirmed to be a very old star. The abundances of this type of old star can constrain the nature and nucleosynthesis processes that occurred in its (even older) progenitors. The present study may shed light on nucleosynthesis processes yielding heavy elements early in the Galaxy. A detailed abundance analysis of a high-quality spectrum is carried out, with the intent of providing a reference on stellar lines and abundances of a very old, metal-poor subgiant. We aim to derive abundances from most available and measurable spectral lines. The analysis is carried out using a high-resolution (R = 81 000) and high signal-to-noise ratio (800 < S/N/pixel < 3400) spectrum, in the wavelength range 3700-10475 Å, obtained with a seven-hour exposure time, using ESPaDOnS at the CFHT. The calculations in LTE were performed with the OSMARCS 1D atmospheric model and the spectrum synthesis code Turbospectrum, while the analysis in NLTE is based on the MULTI code... 5. Experiences in Uranium Abundance Analysis Using the MTE Methodology BLACK CLAUDIE KATE; Zuleger, Evelyn; HORTA DOMENECH Joan; VARGAS ZUNIGA Martin 2012-01-01 The ITU in Karlsruhe routinely analyses a multitude of samples from a wide range of internal and external customers for safeguards. High-throughput analysis techniques are employed with meticulous care to ensure accurate, precise and timely results are provided. Thermal ionisation mass spectrometry (TIMS) is used for isotopic analysis of uranium to determine abundance and concentration information. At ITU we employ the modified total evaporation (MTE) methodology[1] for minor isotope an... 6. Chemical abundances of the high-latitude Herbig Ae Star PDS2 Cowley, C R; Przybilla, N 2014-01-01 The Herbig Ae star PDS2 (CD -53 251) is unusual in several ways. It has a high Galactic latitude, unrelated to any known star-forming region. It is at the cool end of the Herbig Ae sequence, where favorable circumstances facilitate the determination of stellar parameters and chemical abundances. We find Teff = 6500 K and log g = 3.5.
The relatively low v sin i = 12 ± 2 km/s made it possible to use mostly weak lines for the abundances. PDS2 appears to belong to the class of Herbig Ae stars with normal volatile and depleted involatile elements. This pattern is seen not only in λ Boo stars, but in some post-AGB and RV Tauri stars. The appearance of the same abundance pattern in young stars and highly evolved giants strengthens the hypothesis of gas-grain separation for its origin. The intermediate volatile zinc can violate the pattern of depleted volatiles. 7. Statistical analysis of Fe abundance gradients in the Galaxy Cui, Chenzhou 2001-01-01 8. The effect of rotation on the abundances of the chemical elements of the A-type stars in the Praesepe cluster Fossati, L.; Bagnulo, S.; Landstreet, J.; Wade, G.; Kochukhov, O.; Monier, R.; Weiss, W.; Gebran, M. 2008-06-01 Aims: We study how chemical abundances of late B-, A-, and early F-type stars evolve with time, and we search for correlations between the abundance of chemical elements and other stellar parameters, such as effective temperature and υ sin i. Methods: We observed a large number of B-, A-, and F-type stars belonging to open clusters of different ages. In this paper we concentrate on the Praesepe cluster (log t = 8.85), for which we have obtained high-resolution, high signal-to-noise ratio spectra of sixteen normal A- and F-type stars and one Am star, using the SOPHIE spectrograph of the Observatoire de Haute-Provence. For all the observed stars, we derived fundamental parameters and chemical abundances.
In addition, we discuss another eight Am stars belonging to the same cluster, for which the abundance analysis had been presented in a previous paper. Results: We find a strong correlation between the peculiarity of Am stars and υ sin i. The abundance of the elements underabundant in Am stars increases with υ sin i, while it decreases for the overabundant elements. Chemical abundances of various elements appear correlated with the iron abundance. Based on observations made at the Observatoire de Haute-Provence. 9. The magnetic field topology and chemical abundance distributions of the Ap star HD 32633 Silvester, James; Kochukhov, Oleg; Wade, G. A. 2015-01-01 Previous observations of the Ap star HD 32633 indicated that its magnetic field was unusually complex in nature and could not be characterised by a simple dipolar structure. Here we derive magnetic field maps and chemical abundance distributions for this star using full Stokes vector (Stokes IQUV) high-resolution observations obtained with the ESPaDOnS and Narval spectropolarimeters. Our maps, produced using the Invers10 magnetic Doppler imaging (MDI) code, show that HD 32633 has a strong m... 10. Detailed chemical abundances of distant RR Lyrae stars in the Virgo Stellar Stream Duffau, S; Vivas, A K; Hansen, C J; Zoccali, M; Catelan, M; Minniti, D; Grebel, E K 2016-01-01 We present the first detailed chemical abundances for distant RR Lyrae star members of the Virgo Stellar Stream (VSS), derived from X-Shooter medium-resolution spectra. Sixteen elements from carbon to barium have been measured in six VSS RR Lyrae stars, sampling all main nucleosynthetic channels. For the first time we will be able to compare in detail the chemical evolution of the VSS progenitor with those of Local Group dwarf spheroidal galaxies (LG dSph) as well as that of the smooth halo. 11. A holistic abundance analysis of r-rich stars Zhang, Jiang; Zhang, Bo; 10.1111/j.1365-2966.2010.17374.x 2010-01-01 The chemical abundances of metal-poor stars are an excellent test bed by which to set new constraints on models of neutron-capture processes at low metallicity. Some r-process-rich (hereafter r-rich) metal-poor stars, such as HD 221170, show an overabundance of the heavier neutron-capture elements and excesses of lighter neutron-capture elements. The study of these r-rich stars could give us a better understanding of weak and main r-process nucleosynthesis at low metallicity. Based on conclusions from the observation of metal-poor stars and neutron-capture element nucleosynthesis theory, we set up a model to determine the relative contributions from weak and main r-processes to the heavy-element abundances in metal-poor stars. Using this model, we find that the abundance patterns of light elements for most sample stars are close to the pattern of weak r-process stars, and those of heavier neutron-capture elements very similar to the pattern of main r-process stars, while the lighter neutron-capture elements ca... 12. Chemical abundances of the metal-poor horizontal-branch stars CS 22186-005 and CS 30344-033 Caliskan, S; Bonifacio, P; Christlieb, N; Monaco, L; Beers, T C; Albayrak, B; Sbordone, L 2014-01-01 We report on a chemical-abundance analysis of two very metal-poor horizontal-branch stars in the Milky Way halo: CS 22186-005 ([Fe/H]=-2.70) and CS 30344-033 ([Fe/H]=-2.90).
The analysis is based on high-resolution spectra obtained at ESO, with the spectrographs HARPS at the 3.6 m telescope, and UVES at the VLT. We adopted one-dimensional, plane-parallel model atmospheres assuming local thermodynamic equilibrium. We derived elemental abundances for 13 elements for CS 22186-005 and 14 elements for CS 30344-033. This study is the first abundance analysis of CS 30344-033. CS 22186-005 has been analyzed previously, but we report here the first measurement of nickel (Ni; Z = 28) for this star, based on twenty-two NiI lines ([Ni/Fe]=-0.21$\\pm$0.02); the measurement is significantly below the mean found for most metal-poor stars. Differences of up to 0.5 dex in [Ni/Fe] ratios were determined by different authors for the same type of stars in the literature, which means that it is not yet possible to conclude that th... 13. Abundance analysis of extremely metal-poor stars Hansen, T.; Hansen, C. J.; Christlieb, N.; Andersen, J. 2016-01-01 The outer atmosphere of the first generations of low-mass (M elements into the early ISM. Thus a detailed abundance analysis of low-mass, metal-poor stars can help us track these gasses and provide insight into the formation processes that took place in the very early stages of our Galaxy. Preliminary result of a 25-star homogeneously analysed sample of metal- poor candidates from the Hamburg/ESO survey is presented. The main focus is on the most metal-poor stars of the sample; stars with [Fe/H] abundance pattern of these ultra metal-poor (UMP) stars is used to extract key information of the earliest ongoing formation processes (ranging from hydrostatic burning to neutron-capture processes). 14. A determination of the thick disk chemical abundance distribution: Implications for galaxy evolution Gilmore, Gerard; Wyse, Rosemary F. G.; Jones, Bryn J. 1995-01-01 We present a determination of the thick disk iron abundance distribution obtained from an in situ sample of F/G stars. These stars are faint, 15 less than or approximately = V less than or approximately = 18, selected on the basis of color, being a subset of the larger survey of Gilmore and Wyse designed to determine the properties of the stellar populations several kiloparsecs from the Sun. The fields studied in the present paper probe the iron abundance distribution of the stellar populations of the galaxy at 500-3000 pc above the plane, at the solar Galactocentric distance. The derived chemical abundance distributions are consistent with no metallicity gradients in the thick disk over this range of vertical distance, and with an iron abundance distribution for the thick disk that has a peak at -0.7 dex. The lack of a vertical gradient argues against slow, dissipational settling as a mechanism for the formation of the thick disk. The photometric and metallicity data support a turn-off of the thick disk that is comparable in age to the metal-rich globular clusters, or greater than or approximately = 12 Gyr, and are consistent with a spread to older ages. 15. Chemical abundances and properties of the ionized gas in NGC 1705 Annibali, F; Pasquali, A; Aloisi, A; Mignoli, M; Romano, D 2015-01-01 We obtained [O III] narrow-band imaging and multi-slit MXU spectroscopy of the blue compact dwarf (BCD) galaxy NGC 1705 with FORS2@VLT to derive chemical abundances of PNe and H II regions and, more in general, to characterize the properties of the ionized gas. 
The auroral [O III]\\lambda4363 line was detected in all but one of the eleven analyzed regions, allowing for a direct estimate of their electron temperature. The only object for which the [O III]\\lambda4363 line was not detected is a possible low-ionization PN, the only one detected in our data. For all the other regions, we derived the abundances of Nitrogen, Oxygen, Neon, Sulfur and Argon out to ~1 kpc from the galaxy center. We detect for the first time in NGC 1705 a negative radial gradient in the oxygen metallicity of -0.24 \\pm 0.08 dex kpc^{-1}. The element abundances are all consistent with values reported in the literature for other samples of dwarf irregular and blue compact dwarf galaxies. However, the average (central) oxygen abundance, 12 +... 16. Chemical Feature of Eu abundance in the Draco dwarf spheroidal galaxy Tsujimoto, Takuji; Shigeyama, Toshikazu; Aoki, Wako 2015-01-01 Chemical abundance of r-process elements in nearby dwarf spheroidal (dSph) galaxies is a powerful tool to probe the site of r-process since their small-mass scale can sort out individual events producing r-process elements. A merger of binary neutron stars is a promising candidate of this site. In faint, or less massive dSph galaxies such as the Draco, a few binary neutron star mergers are expected to have occurred at most over the whole past. We have measured chemical abundances including Eu and Ba of three red giants in the Draco dSph by Subaru/HDS observation. The Eu detection for one star with [Fe/H]=-1.45 confirms a broadly constant [Eu/H] of ~-1.3 for stars with [Fe/H]>-2. This feature is shared by other dSphs with similar masses, i.e., the Sculptor and the Carina, and suggests that neutron star merger is the origin of r-process elements in terms of its rarity. In addition, two very metal-poor stars with [Fe/H]=-2.12 and -2.51 are found to exhibit very low Eu abundances such as [Eu/H]<-2 with an impl... 17. CHEMICAL ABUNDANCES OF METAL-POOR RR LYRAE STARS IN THE MAGELLANIC CLOUDS We present for the first time a detailed spectroscopic study of chemical element abundances of metal-poor RR Lyrae stars in the Large and Small Magellanic Cloud (LMC and SMC). Using the MagE echelle spectrograph at the 6.5 m Magellan telescopes, we obtain medium resolution (R ∼ 2000-6000) spectra of six RR Lyrae stars in the LMC and three RR Lyrae stars in the SMC. These stars were chosen because their previously determined photometric metallicities were among the lowest metallicities found for stars belonging to the old populations in the Magellanic Clouds. We find the spectroscopic metallicities of these stars to be as low as [Fe/H]spec = –2.7 dex, the lowest metallicity yet measured for any star in the Magellanic Clouds. We confirm that for metal-poor stars, the photometric metallicities from the Fourier decomposition of the light curves are systematically too high compared to their spectroscopic counterparts. However, for even more metal-poor stars below [Fe/H]phot < –2.8 dex this trend is reversed and the spectroscopic metallicities are systematically higher than the photometric estimates. We are able to determine abundance ratios for 10 chemical elements (Fe, Na, Mg, Al, Ca, Sc, Ti, Cr, Sr, and Ba), which extend the abundance measurements of chemical elements for RR Lyrae stars in the Clouds beyond [Fe/H] for the first time. 
For the overall [α/Fe] ratio, we obtain an overabundance of 0.36 dex, which is in very good agreement with results from metal-poor stars in the Milky Way halo as well as from the metal-poor tail in dwarf spheroidal galaxies. Comparing the abundances with those of the stars in the Milky Way halo we find that the abundance ratios of stars of both populations are consistent with another. Therefore, we conclude that from a chemical point of view early contributions from Magellanic-type galaxies to the formation of the Galactic halo as claimed in cosmological models are plausible. 18. CD -24°17504: A New Comprehensive Element Abundance Analysis Jacobson, Heather R.; Frebel, Anna 2015-07-01 With [Fe/H] ˜ -3.3, CD -24°17504 is a canonical metal-poor main-sequence turn-off star. Though it has appeared in numerous literature studies, the most comprehensive abundance analysis for the star based on high-resolution, high signal-to-noise ratio (S/N) spectra is nearly 15 years old. We present a new detailed abundance analysis for 21 elements based on combined archival Keck-HIRES and Very Large Telescope-UVES spectra of the star that is higher in both spectral resolution and S/N than previous data. Our results are very similar to those of an earlier comprehensive study of the star, but we present for the first time a carbon abundance from the CH G-band feature as well as improved upper limits for neutron-capture species such as Y, Ba, and Eu. In particular, we find that CD -24°17504 has [Fe/H] = -3.41, [C/Fe] = +1.10, [Sr/H] = -4.68, and [Ba/H] ≤ -4.46, making it a carbon-enhanced metal-poor star with neutron-capture element abundances among the lowest measured in Milky Way halo stars. This work is based on data obtained from the ESO Science Archive Facility and associated with Programs 68.D-0094(A) and 073.D-0024(A). This work is also based on data obtained from the Keck Observatory Archive (KOA), which is operated by the W.M. Keck Obsevatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. These data are associated with Program C01H (P.I. Mélendez). 19. Unveiling the Nature of the "Green Pea" Galaxies: Oxygen and Nitrogen Chemical Abundances Amorín, R. O.; Pérez-Montero, E.; Vílchez, J. M. 2011-07-01 We present recent results on the oxygen and nitrogen chemical abundances in the extremely compact, low-mass starburst galaxies at redshifts 0.1-0.3 usually referred to as "green pea" galaxies. We show that they are metal-poor galaxies (~1/5 solar) with lower oxygen abundances than star-forming galaxies of similar mass and N/O ratios unusually high for galaxies of the same metallicity. Recent, rapid, and massive inflows of cold gas, possibly coupled with enriched outflows from supernova winds, are used to explain the results. This is consistent with the known "pea" galaxy properties and suggest that these rare objects are experiencing a short and extreme phase in their evolution. 20. A DETAILED LOOK AT CHEMICAL ABUNDANCES IN MAGELLANIC CLOUD PLANETARY NEBULAE. I. THE SMALL MAGELLANIC CLOUD We present an analysis of elemental abundances of He, N, O, Ne, S, and Ar in Magellanic Cloud planetary nebulae (PNe) and focus initially on 14 PNe in the Small Magellanic Cloud (SMC). We derive the abundances from a combination of deep, high-dispersion optical spectra, as well as mid-infrared (IR) spectra from the Spitzer Space Telescope. 
A detailed comparison with prior SMC PN studies shows that significant variations in relative emission-line flux determinations among the authors lead to systematic discrepancies in derived elemental abundances between studies that are ≳0.15 dex, in spite of similar analysis methods. We use ionic abundances derived from IR emission lines, including those from ionization stages not observable in the optical, to examine the accuracy of some commonly used recipes for ionization correction factors (ICFs). These ICFs, which were developed for ions observed in the optical and ultraviolet, relate ionic abundances to total elemental abundances. We find that most of these ICFs work very well even in the limit of substantially sub-solar metallicities, except for PNe with very high ionization. Our abundance analysis shows enhancements of He and N that are predicted from prior dredge-up processes of the progenitors on the asymptotic giant branch (AGB), as well as the well-known correlations among O, Ne, S, and Ar that are little affected by nucleosynthesis in this mass range. We identify MG 8 as an interesting limiting case of a PN central star with a ~3.5 Msun progenitor in which hot-bottom burning did not occur in its prior AGB evolution. We find no evidence for O depletion in the progenitor AGB stars via the O-N cycle, which is consistent with predictions for lower-mass stars. We also find low S/O ratios relative to SMC H II regions, with a deficit comparable to what has been found for Galactic PNe. Finally, the elemental abundances of one object, SMP-SMC 11, are more typical of SMC H II regions, which raises some doubt about its ... 1. The magnetic field topology and chemical abundance distributions of the Ap star HD 32633 Silvester, J.; Kochukhov, O.; Wade, G. A. 2015-10-01 Previous observations of the Ap star HD 32633 indicated that its magnetic field was unusually complex in nature and could not be characterized by a simple dipolar structure. Here we derive magnetic field maps and chemical abundance distributions for this star using full Stokes vector (Stokes IQUV) high-resolution observations obtained with the ESPaDOnS and Narval spectropolarimeters. Our maps, produced using the INVERS10 magnetic Doppler imaging (MDI) code, show that HD 32633 has a strong magnetic field which features two large regions of opposite polarity but deviates significantly from a pure dipole field. We use a spherical harmonic expansion to characterize the magnetic field and find that the harmonic energy is predominantly in the ℓ = 1 and 2 poloidal modes, with a small toroidal component. At the same time, we demonstrate that the observed Stokes parameter profiles of HD 32633 cannot be fully described by either a dipolar or dipolar-plus-quadrupolar field geometry. We compare the magnetic field topology of HD 32633 with other early-type stars for which MDI analyses have been performed, supporting a trend of increasing field complexity with stellar mass. We then compare the magnetic field topology of HD 32633 with derived chemical abundance maps for the elements Mg, Si, Ti, Cr, Fe, Ni and Nd. We find that the iron-peak elements show similar distributions, but we are unable to find a clear correlation between the location of local chemical enhancements or depletions and the magnetic field structure.
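The HD 32633 entry above expands the mapped field in spherical harmonics and reports where the magnetic energy sits (mostly in the ℓ = 1 and 2 poloidal modes). A minimal sketch of that kind of bookkeeping follows; the coefficient values are invented and the plain sum-of-squares energy proxy is a simplification of the weighting actually used by magnetic Doppler imaging codes.

```python
import numpy as np

# Hypothetical harmonic coefficients grouped by degree l; each array collects
# the poloidal and toroidal coefficients over all orders m for that degree.
# Values are invented for illustration.
coeffs = {
    1: np.array([900.0, 400.0, 250.0, 300.0, 150.0, 80.0, 60.0]),
    2: np.array([350.0, 200.0, 120.0, 90.0, 100.0, 70.0, 40.0]),
    3: np.array([60.0, 40.0, 30.0, 20.0, 15.0, 10.0]),
}

energy = {l: float(np.sum(c ** 2)) for l, c in coeffs.items()}
total = sum(energy.values())
for l in sorted(energy):
    print(f"l = {l}: {100.0 * energy[l] / total:.1f}% of the (proxy) field energy")
```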
Chemical Elements Abundance in the Universe and the Origin of Life Valkovic, Vlado 2016-01-01 Element synthesis which started with p-p chain has resulted in several specific characteristics including lack of any stable isotope having atomic masses 5 or 8. The carbon to oxygen ratio is fixed early by the chain of coincidences. These, remarkably fine-tuned, conditions are responsible for our own existence and indeed the existence of any carbon based life in the Universe. Chemical evolution of galaxies reflects in the changes of chemical composition of stars, interstellar gas and dust. The evolution of chemical element abundances in a galaxy provides a clock for galactic aging. On the other hand, the living matter on the planet Earth needs only some elements for its existence. Compared with element requirements of living matter a hypothesis is put forward, by accepting the Anthropic Principle, which says: life as we know, (H-C-N-O) based, relying on the number of bulk and trace elements originated when two element abundance curves, living matter and galactic, coincided. This coincidence occurring at part... 3. Galactic chemical abundance evolution in the solar neighborhood up to the Iron peak Alibes, A; Canal, R; Alibes, Andreu; Labay, Javier; Canal, Ramon 2000-01-01 We have developed a detailed standard chemical evolution model to study the evolution of all the chemical elements up to the iron peak in the solar vicinity. We consider that the Galaxy was formed through two episodes of exponentially decreasing infall, out of extragalactic gas. In a first infall episode, with a duration of$\\sim$1 Gyr, the halo and the thick disk were assembled out of primordial gas, while the thin disk formed in a second episode of infall of slightly enriched extragalactic gas, with much longer timescale. The model nicely reproduces the main observational constraints of the solar neighborhood, and the calculated elemental abundances at the time of the solar birth are in excellent agreement with the solar abundances. By the inclusion of metallicity dependent yields for the whole range of stellar masses we follow the evolution of 76 isotopes of all the chemical elements between hydrogen and zinc. Those results are confronted with a large and recent body of observational data, and we discuss ... 4. Stokes IQUV magnetic Doppler imaging of Ap stars - III. Next generation chemical abundance mapping of α2 CVn Silvester, J.; Kochukhov, O.; Wade, G. A. 2014-10-01 In a previous paper, we presented an updated magnetic field map for the chemically peculiar star α2 CVn using ESPaDOnS and Narval time-resolved high-resolution Stokes IQUV spectra. In this paper, we focus on mapping various chemical element distributions on the surface of α2 CVn. With the new magnetic field map and new chemical abundance distributions, we can investigate the interplay between the chemical abundance structures and the magnetic field topology on the surface of α2 CVn. Previous attempts at chemical abundance mapping of α2 CVn relied on lower resolution data. With our high-resolution (R = 65 000) data set, we present nine chemical abundance maps for the elements O, Si, Cl, Ti, Cr, Fe, Pr, Nd and Eu. We also derive an updated magnetic field map from Fe and Cr lines in Stokes IQUV and O and Cl in Stokes IV. These new maps are inferred from line profiles in Stokes IV using the magnetic Doppler imaging code INVERS10. We examine these new chemical maps and investigate correlations with the magnetic topology of α2 CVn. 
We show that chemical abundance distributions vary between elements, with two distinct groups of elements; one accumulates close to the negative part of the radial field, whilst the other group shows higher abundances located where the radial magnetic field is of the order of 2 kG regardless of the polarity of the radial field component. We compare our results with previous works which have mapped chemical abundance structures of Ap stars. With the exception of Cr and Fe, we find no clear trend between what we reconstruct and other mapping results. We also find a lack of agreement with theoretical predictions. This suggests that there is a gap in our theoretical understanding of the formation of horizontal chemical abundance structures and the connection to the magnetic field in Ap stars. 5. Chemical abundances for the transiting planet host stars OGLE-TR-10, 56, 111, 113, 132 and TrES-1. Abundances in different galactic populations Santos, N. C.; Israelian, G.; Mayor, M.; Melo, C.; Queloz, D.; Udry, S.; Ribeiro, J. P.; Jorge, S. 2006-01-01 We used the UVES spectrograph (VLT-UT2 telescope) to obtain high-resolution spectra of 6 stars hosting transiting planets, namely OGLE-TR-10, 56, 111, 113, 132 and TrES-1. The spectra are now used to derive and discuss the chemical abundances for C, O, Na, Mg, Al, Si, S, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Cu and Zn. Abundances were derived in LTE, using 1-D plane-parallel Kurucz model atmospheres. For S, Zn and Cu we used a spectral synthesis procedure, while for the remaining cases the abundances were derived from measurements of line equivalent widths. The resulting abundances are compared with those found for stars in the solar neighborhood. Distances and galactic coordinates are estimated for the stars. We conclude that, besides being particularly metal-rich and with small possible exceptions, OGLE-TR-10, 56, 111, 113, 132 and TrES-1 are chemically indistinguishable from the field (thin disk) stars regarding their [X/Fe] abundances. This is particularly relevant for the most distant of the targets, located at ... 6. The Complex Chemical Abundances and Evolution of the Sagittarius Dwarf Spheroidal Galaxy Smecker-Hane, Tammy A.; McWilliam, Andrew 2002-01-01 We report on the chemical abundances derived from high-dispersion spectra of 14 red giant stars in the Sagittarius dwarf spheroidal (Sgr dSph) galaxy. The stars span a wide range of metallicities, -1.6 < [Fe/H] < -0.1 dex, and exhibit very unusual abundance variations. For metal-poor stars with [Fe/H] < -1, [α/Fe] ≈ +0.3, similar to Galactic halo stars, but for more metal-rich stars the relationship of [α/Fe] as a function of [Fe/H] is lower than that of the Galactic disk by 0.1 dex. The light elements [Al/Fe] and [Na/Fe] are sub-solar by an even larger amount, 0.4 dex. The pattern of neutron-capture heavy elements, as indicated by [La/Fe] and [La/Eu], shows an increasing s-process component with increasing [Fe/H], up to [La/Fe] ~ +0.7 dex for the most metal-rich Sgr dSph stars. The large [La/Y] ratios show that the s-process enrichments came from the metal-poor population. We can best understand the observed abundances with a model in which the Sgr dSph formed stars over many Gyr a...
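Many of the entries in this listing quote ratios such as [α/Fe], [La/Eu], or [La/Y] in the standard bracket notation. As a generic reminder of the arithmetic involved (a sketch not tied to any particular study above; all input values are hypothetical), [X/H] is the logarithmic number-density ratio relative to the solar value, and ratios between two elements follow by subtraction:

```python
import math

# Bracket notation used throughout these abstracts:
#   [X/H]  = log10(N_X / N_H)_star - log10(N_X / N_H)_sun
#   [X/Fe] = [X/H] - [Fe/H]
# All numbers below are hypothetical placeholders.

def bracket(ratio_star, ratio_sun):
    """Logarithmic abundance of an element relative to the solar value."""
    return math.log10(ratio_star) - math.log10(ratio_sun)

fe_h = -1.2                        # [Fe/H] of the star (hypothetical)
la_h = bracket(2.0e-11, 1.3e-10)   # [La/H] from hypothetical number densities
eu_h = -1.4                        # [Eu/H] (hypothetical)

la_fe = la_h - fe_h                # [La/Fe]
la_eu = la_h - eu_h                # [La/Eu], an s- vs r-process indicator
print(f"[La/H] = {la_h:+.2f}, [La/Fe] = {la_fe:+.2f}, [La/Eu] = {la_eu:+.2f}")
```

7.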
Chemical Abundances in the Globular Clusters NGC 6229 and NGC 6779 Khamidullina, D. A.; Shimansky, V. V.; Davoust, E. 2015-01-01 Long-slit medium-resolution spectra of the Galactic globular clusters (GCs) NGC 6229 and NGC 6779, obtained with the CARELEC spectrograph at the 1.93-m telescope of the Haute-Provence Observatory, have been used to determine the age, helium abundance (Y), and metallicity [Fe/H], as well as the first estimate of the abundances of C, N, O, Mg, Ca, Ti, and Cr for these objects. We addressed this task by comparing the observed spectra with integrated synthetic spectra calculated using stellar atmosphere models with parameters preset for the stars in these clusters. The model mass estimates, T_eff, and log g were derived by comparing the observed colour-magnitude diagrams with theoretical isochrones. The summing-up of the synthetic blanketed stellar spectra was conducted according to the Chabrier mass function. To test the accuracy of the results, we estimated the chemical abundances, [Fe/H], log t, and Y for the NGC 5904 and NGC 6254 clusters, which, according to the literature, a... 8. Abundance analysis of barium and mild barium stars Smiljanic, R.; Silva, L. 2007-01-01 High signal-to-noise, high-resolution spectra were obtained for a sample of normal, mild barium, and barium giants. Atmospheric parameters were determined from the Fe I and Fe II lines. Abundances for Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Y, Zr, Ba, La, Ce, Nd, Sm, Eu, and Gd were determined from equivalent widths and model atmospheres in a differential analysis, with the red giant Eps Vir as the standard star. The different levels of s-process overabundances of barium and mild barium stars were earlier suggested to be related to the stellar metallicity. Contrary to this suggestion, we found in this work no evidence for barium and mild barium stars to have a different range in metallicity. However, comparing the ratio of abundances of heavy to light s-process elements, we found some evidence that they do not share the same neutron exposure parameter. The exact mechanism controlling this difference is still not clear. As a by-product of this analysis we identify two normal red giants misclass... 9. ELEMENTAL ABUNDANCES AND THEIR IMPLICATIONS FOR THE CHEMICAL ENRICHMENT OF THE BOÖTES I ULTRAFAINT GALAXY Gilmore, Gerard [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)]; Norris, John E.; Yong, David [Research School of Astronomy and Astrophysics, The Australian National University, Weston, ACT 2611 (Australia)]; Monaco, Lorenzo [European Southern Observatory, Alonso de Cordova 3107, Casilla 19001, Santiago 19 (Chile)]; Wyse, Rosemary F. G. [Department of Physics and Astronomy, The Johns Hopkins University, 3900 North Charles Street, Baltimore, MD 21218 (United States)]; Geisler, D. [Departamento de Astronomia, Universidad de Concepcion (Chile)] 2013-01-20 We present a double-blind analysis of high-dispersion spectra of seven red giant members of the Boötes I ultrafaint dwarf spheroidal galaxy, complemented with re-analysis of a similar spectrum of an eighth member star. The stars cover [Fe/H] from -3.7 to -1.9 and include a CEMP-no star with [Fe/H] = -3.33.
We conclude from our chemical abundance data that Boötes I has evolved as a self-enriching star-forming system, from essentially primordial initial abundances. This allows us uniquely to investigate the place of CEMP-no stars in a chemically evolving system, in addition to limiting the timescale of star formation. The elemental abundances are formally consistent with a halo-like distribution, with enhanced mean [α/Fe] and small scatter about the mean. This is in accord with the high-mass stellar initial mass function in this low-stellar-density, low-metallicity system being indistinguishable from the present-day solar neighborhood value. There is a non-significant hint of a decline in [α/Fe] with [Fe/H]; together with the low scatter, this requires low star formation rates, allowing time for supernova ejecta to be mixed over the large spatial scales of interest. One star has very high [Ti/Fe], but we do not confirm a previously published high value of [Mg/Fe] for another star. We discuss the existence of CEMP-no stars, and the absence of any stars with lower CEMP-no enhancements at higher [Fe/H], a situation that is consistent with knowledge of CEMP-no stars in the Galactic field. We show that this observation requires that there be two enrichment paths at very low metallicities: CEMP-no and 'carbon-normal'. 10. Chemical abundance gradients from open clusters in the Milky Way disk: results from the APOGEE survey Cunha, Katia; Souto, Diogo; Thompson, Benjamin; Zasowski, Gail; Prieto, Carlos Allende; Carrera, Ricardo; Chiappini, Cristina; Donor, John; Garcia-Hernandez, Anibal; Perez, Ana Elia Garcia; Hayden, Michael R.; Holtzman, Jon; Jackson, Kelly M.; Johnson, Jennifer A.; Majewski, Steven R.; Meszaros, Szabolcs; Meyer, Brianne; Nidever, David L.; O'Connell, Julia; Schiavon, Ricardo P.; Schultheis, Mathias; Shetrone, Matthew; Simmons, Audrey; Smith, Verne V.; Zamora, Olga 2016-01-01 Metallicity gradients provide strong constraints for understanding the chemical evolution of the Galaxy. We report on radial abundance gradients of Fe, Ni, Ca, Si, and Mg obtained from a sample of 304 red-giant members of 29 disk open clusters, mostly concentrated at galactocentric distances between ~8-15 kpc, but including two open clusters in the outer disk. The observations are from the APOGEE survey. The chemical abundances were derived automatically by the ASPCAP pipeline and are part of the SDSS-III Data Release 12. The gradients, obtained from least-squares fits to the data, are relatively flat, with slopes ranging from -0.026 to -0.033 dex/kpc for the alpha-elements [O/H], [Ca/H], [Si/H] and [Mg/H], and -0.035 dex/kpc and -0.040 dex/kpc for [Fe/H] and [Ni/H], respectively. Our results are not at odds with the possibility that metallicity ([Fe/H]) gradients are steeper in the inner disk (R_GC ~7-12 kpc) and flatter towards the outer disk. The open cluster sample studied spans a significant ran... 11. Reconstructing the star formation history of the Milky Way disc(s) from chemical abundances Snaith, O.; Di Matteo, P.; Lehnert, M. D.; Combes, F.; Katz, D.; Gómez, A. 2014-01-01 We develop a chemical evolution model in order to study the star formation history of the Milky Way. Our model assumes that the Milky Way formed from a closed-box-like system in the inner regions, while the outer parts of the disc experience some accretion. Unlike the usual procedure, we do not fix the star formation prescription (e.g. Kennicutt law) in order to reproduce the chemical abundance trends.
Instead, we fit the abundance trends with age in order to recover the star formation history of the Galaxy. Our method enables one to recover with unprecedented accuracy the star formation history of the Milky Way in the first Gyr, in both the inner (R < 7-8 kpc) and outer (R > 9-10 kpc) discs, the latter as sampled in the solar vicinity. We show that, in the inner disc, half of the stellar mass formed during the thick disc phase, in the first 4-5 Gyr. This phase was followed by a significant dip in the star formation activity (at 8-9 Gyr) and a period of roughly constant lower level star formation for the remaining 8 Gyr. The thick disc phase ha... 12. The Contribution of Chemical Abundances in Nova Ejecta to the Interstellar Medium Li, Fanger; Lu, Guoliang; Wang, Zhaojun 2016-01-01 According to the nova models of Yaron et al. (2005) and José & Hernanz (1998), and using a Monte Carlo simulation method, we investigate the contribution of chemical abundances in nova ejecta to the interstellar medium (ISM) of the Galaxy. We find that the mass ejected by classical novae (CNe) is about 2.7 × 10⁻³ M⊙ yr⁻¹. In the nova ejecta, the isotopic ratios of C, N and O, that is, ¹³C/¹²C, ¹⁵N/¹⁴N and ¹⁷O/¹⁶O, are higher by about one order of magnitude than those in red giants. We estimate that about 10%, 5% and 20% of ¹³C, ¹⁵N and ¹⁷O in the ISM of the Galaxy come from nova ejecta, respectively. However, the chemical abundances of C, N and O calculated by our model cannot cover all of the observational values. This means that there is still a long way to go in understanding novae. 13. The contribution of chemical abundances in nova ejecta to the interstellar medium Li, Fanger; Zhu, Chunhua; Lü, Guoliang; Wang, Zhaojun 2016-06-01 According to the nova model from Yaron et al. (2005, ApJ, 418, 794) and José and Hernanz (1998, ApJ, 494, 680), and using a Monte Carlo simulation method, we investigate the contribution of chemical abundances in nova ejecta to the interstellar medium (ISM) of the Galaxy. We find that the mass ejected from classical novae is about 2.7 × 10⁻³ M⊙ yr⁻¹. In the nova ejecta, the isotopic ratios of C, N, and O, that is, ¹³C/¹²C, ¹⁵N/¹⁴N, and ¹⁷O/¹⁶O, are higher by about one order of magnitude than those in red giants. We estimate that about 10%, 5%, and 20% of ¹³C, ¹⁵N, and ¹⁷O in the ISM of the Galaxy come from nova ejecta, respectively. However, the chemical abundances of C, N, and O calculated by our model cannot cover all observational values. This means that there is still a long way to go to understand novae. 14. Verrucomicrobial community structure and abundance as indicators for changes in chemical factors linked to soil fertility. Navarrete, Acacio Aparecido; Soares, Tielle; Rossetto, Raffaella; van Veen, Johannes Antonie; Tsai, Siu Mui; Kuramae, Eiko Eurya 2015-09-01 Here we show that verrucomicrobial community structure and abundance are extremely sensitive to changes in chemical factors linked to soil fertility. Terminal restriction fragment length polymorphism fingerprinting and real-time quantitative PCR assays were used to analyze changes in verrucomicrobial communities associated with contrasting soil nutrient conditions in tropical regions. In case study Model I ("Slash-and-burn deforestation"), the verrucomicrobial community structures revealed disparate patterns in nutrient-enriched soils after slash-and-burn deforestation and natural nutrient-poor soils under an adjacent primary forest in Amazonia (R = 0.819, P = 0.002).
The relative proportion of Verrucomicrobia declined in response to increased soil fertility after slash-and-burn deforestation, accounting on average, for 4 and 2 % of the total bacterial signal, in natural nutrient-poor forest soils and nutrient-enriched deforested soils, respectively. In case study Model II ("Management practices for sugarcane") disparate patterns were revealed in sugarcane rhizosphere sampled on optimal and deficient soil fertility for sugarcane (R = 0.786, P = 0.002). Verrucomicrobial community abundance in sugarcane rhizosphere was negatively correlated with soil fertility, accounting for 2 and 5 % of the total bacterial signal, under optimal and deficient soil fertility conditions for sugarcane, respectively. In nutrient-enriched soils, verrucomicrobial community structures were related to soil factors linked to soil fertility, such as total nitrogen, phosphorus, potassium and sum of bases, i.e., the sum of calcium, magnesium and potassium contents. We conclude that community structure and abundance represent important ecological aspects in soil verrucomicrobial communities for tracking the changes in chemical factors linked to soil fertility under tropical environmental conditions. PMID:26184407 15. Chemical Analysis Facility Federal Laboratory Consortium — FUNCTION: Uses state-of-the-art instrumentation for qualitative and quantitative analysis of organic and inorganic compounds, and biomolecules from gas, liquid, and... 16. Microprocessors in automatic chemical analysis Application of microprocessors to programming and computing of solutions chemical analysis by a sequential technique is examined. Safety, performances reliability are compared to other methods. An example is given on uranium titration by spectrophotometry 17. Chemical substructure analysis in toxicology A preliminary examination of chemical-substructure analysis (CSA) demonstrates the effective use of the Chemical Abstracts compound connectivity file in conjunction with the bibliographic file for relating chemical structures to biological activity. The importance of considering the role of metabolic intermediates under a variety of conditions is illustrated, suggesting structures that should be examined that may exhibit potential activity. This CSA technique, which utilizes existing large files accessible with online personal computers, is recommended for use as another tool in examining chemicals in drugs. 2 refs., 4 figs 18. Chemical substructure analysis in toxicology Beauchamp, R.O. Jr. [Center for Information on Toxicology and Environment, Raleigh, NC (United States) 1990-12-31 A preliminary examination of chemical-substructure analysis (CSA) demonstrates the effective use of the Chemical Abstracts compound connectivity file in conjunction with the bibliographic file for relating chemical structures to biological activity. The importance of considering the role of metabolic intermediates under a variety of conditions is illustrated, suggesting structures that should be examined that may exhibit potential activity. This CSA technique, which utilizes existing large files accessible with online personal computers, is recommended for use as another tool in examining chemicals in drugs. 2 refs., 4 figs. 19. 
DETAILED CHEMICAL ABUNDANCES OF FOUR STARS IN THE UNUSUAL GLOBULAR CLUSTER PALOMAR 1 Detailed chemical abundances for 21 elements are presented for four red giants in the anomalous outer halo globular cluster Palomar 1 (R_GC = 17.2 kpc, Z = 3.6 kpc) using high-resolution (R = 36,000) spectra from the High Dispersion Spectrograph on the Subaru Telescope. Pal 1 has long been considered unusual because of its low surface brightness, sparse red giant branch, young age, and its possible association with two extragalactic streams of stars. This paper shows that its chemistry further confirms its unusual nature. The mean metallicity of the four stars, [Fe/H] = -0.60 ± 0.01, is high for a globular cluster so far from the Galactic center, but is low for a typical open cluster. The [α/Fe] ratios, though in agreement with the Galactic stars within the 1σ errors, agree best with the lower values in dwarf galaxies. No signs of the Na/O anticorrelation are detected in Pal 1, though Na appears to be marginally high in all four stars. Pal 1's neutron-capture elements are also unusual: its high [Ba/Y] ratio agrees best with dwarf galaxies, implying an excess of second-peak over first-peak s-process elements, while its [Eu/α] and [Ba/Eu] ratios show that Pal 1's contributions from the r-process must have differed in some way from normal Galactic stars. Therefore, Pal 1 is unusual chemically, as well as in its other properties. Pal 1 shares some of its unusual abundance characteristics with the young clusters associated with the Sagittarius dwarf galaxy remnant and the intermediate-age LMC clusters, and could be chemically associated with the Canis Major overdensity; however, it does not seem to be similar to the Monoceros/Galactic Anticenter Stellar Stream. 20. Instrumental Neutron Activation Analysis in archaeology: interpretation beyond elemental abundance Application of instrumental neutron activation analysis to the study of archaeological ceramics involves the determination of the source or sources used to produce pottery. Groups of relatively homogeneous elemental abundances are shown to be statistically distinct from one another, often leading to an assessment of what was locally produced and what was imported to a site. These assessments, however, are among the most preliminary interpretations. Archaeology is concerned with the reasons for artifact distributions and with how and why those distributions varied through time, reasons that include the social and political basis of ancient economies and how these responded to other factors, such as ideology. These objectives are addressed through the increasing refinement of compositional groups, leading toward greater specificity of attribution. In so doing, the role of analytical precision, among other considerations, grows in importance. This paper illustrates some of these considerations with examples from the U.S. Southwest, the Maya region of southern Mexico, and lower Central America. 1. Comparison of amino acids physico-chemical properties and usage of late embryogenesis abundant proteins, hydrophilins and WHy domain. Jaspard, Emmanuel; Hunault, Gilles 2014-01-01 Late Embryogenesis Abundant proteins (LEAPs) comprise several diverse protein families and are mostly involved in stress tolerance. Most LEAPs are intrinsically disordered and thus poorly functionally characterized. LEAPs have been classified and a large number of their physico-chemical properties have been statistically analyzed.
LEAPs were previously proposed to be a subset of a very wide family of proteins called hydrophilins, while a domain called WHy (Water stress and Hypersensitive response) was found in LEAP class 8 (according to our previous classification). Since little is known about hydrophilins and WHy domain, the cross-analysis of their amino acids physico-chemical properties and amino acids usage together with those of LEAPs helps to describe some of their structural features and to make hypothesis about their function. Physico-chemical properties of hydrophilins and WHy domain strongly suggest their role in dehydration tolerance, probably by interacting with water and small polar molecules. The computational analysis reveals that LEAP class 8 and hydrophilins are distinct protein families and that not all LEAPs are a protein subset of hydrophilins family as proposed earlier. Hydrophilins seem related to LEAP class 2 (also called dehydrins) and to Heat Shock Proteins 12 (HSP12). Hydrophilins are likely unstructured proteins while WHy domain is structured. LEAP class 2, hydrophilins and WHy domain are thus proposed to share a common physiological role by interacting with water or other polar/charged small molecules, hence contributing to dehydration tolerance. PMID:25296175 2. Comparison of amino acids physico-chemical properties and usage of late embryogenesis abundant proteins, hydrophilins and WHy domain. Emmanuel Jaspard Full Text Available Late Embryogenesis Abundant proteins (LEAPs comprise several diverse protein families and are mostly involved in stress tolerance. Most of LEAPs are intrinsically disordered and thus poorly functionally characterized. LEAPs have been classified and a large number of their physico-chemical properties have been statistically analyzed. LEAPs were previously proposed to be a subset of a very wide family of proteins called hydrophilins, while a domain called WHy (Water stress and Hypersensitive response was found in LEAP class 8 (according to our previous classification. Since little is known about hydrophilins and WHy domain, the cross-analysis of their amino acids physico-chemical properties and amino acids usage together with those of LEAPs helps to describe some of their structural features and to make hypothesis about their function. Physico-chemical properties of hydrophilins and WHy domain strongly suggest their role in dehydration tolerance, probably by interacting with water and small polar molecules. The computational analysis reveals that LEAP class 8 and hydrophilins are distinct protein families and that not all LEAPs are a protein subset of hydrophilins family as proposed earlier. Hydrophilins seem related to LEAP class 2 (also called dehydrins and to Heat Shock Proteins 12 (HSP12. Hydrophilins are likely unstructured proteins while WHy domain is structured. LEAP class 2, hydrophilins and WHy domain are thus proposed to share a common physiological role by interacting with water or other polar/charged small molecules, hence contributing to dehydration tolerance. 3. Importance of the H2 abundance in protoplanetary disk ices for the molecular layer chemical composition Wakelam, V; Hersant, F; Dutrey, A; Semenov, D; Majumdar, L; Guilloteau, S 2016-01-01 Protoplanetary disks are the target of many chemical studies (both observational and theoretical) as they contain the building material for planets. Their large vertical and radial gradients in density and temperature make them challenging objects for chemical models. 
In the outer part of these disks, the large densities and low temperatures provide a particular environment where the binding of species onto the dust grains can be very efficient and can affect the gas-phase chemical composition. We attempt to quantify to what extent the vertical abundance profiles and the integrated column densities of molecules predicted by a detailed gas-grain code are affected by the treatment of the molecular hydrogen physisorption at the surface of the grains. We performed three different models using the Nautilus gas-grain code. One model uses a H2 binding energy on the surface of water (440 K) and produces strong sticking of H2. Another model uses a small binding energy of 23 K (as if there were already a monolayer of H... 4. Chemical Abundances of the Highly Obscured Galactic Globular Clusters 2MASS GC02 and Mercer 5 Penaloza, Francisco; Vasquez, Sergio; Borissova, Jura; Kurtev, Radostin; Zoccali, Manuela 2015-01-01 We present the first high spectral resolution abundance analysis of two newly discovered Galactic globular clusters, namely Mercer 5 and 2MASS GC02 residing in regions of high interstellar reddening in the direction of the Galactic center. The data were acquired with the Phoenix high-resolution near-infrared echelle spectrograph at Gemini South (R~50000) in the 15500.0 A - 15575.0 A spectral region. Iron, Oxygen, Silicon, Titanium and Nickel abundances were derived for two red giant stars, in each cluster, by comparing the entire observed spectrum with a grid of synthetic spectra generated with MOOG. We found [Fe/H] values of -0.86 +/- 0.12 and -1.08 +/- 0.13 for Mercer 5 and 2MASS GC02 respectively. The [O/Fe], [Si/Fe] and [Ti/Fe] ratios of the measured stars of Mercer 5 follow the general trend of both bulge field and cluster stars at this metallicity, and are enhanced by > +0.3. The 2MASS GC02 stars have relatively lower ratios, but still compatible with other bulge clusters. Based on metallicity and abund... 5. Chemical abundances in Orion protoplanetary discs: integral field spectroscopy and photoevaporation models of HST 10 Tsamis, Y G; Henney, W J; Walsh, J R; Mesa-Delgado, A 2012-01-01 Photoevaporating protoplanetary discs (proplyds) in the vicinity of hot massive stars, such as those found in Orion, are important objects of study for the fields of star formation, early disc evolution, planetary formation, and H II region astrophysics. Their element abundances are largely unknown, unlike those of the main-sequence stars or the host Orion nebula. We present a spectroscopic analysis of the Orion proplyd HST 10, based on integral field observations with the Very Large Telescope/FLAMES fibre array at a resolution of 0.31" x 0.31". The proplyd and its vicinity are imaged in a variety of emission lines across a 6.6" x 4.2" area. The reddening, electron density and temperature are mapped out from various line diagnostics. The abundances of helium, and eight heavy elements are measured relative to hydrogen using the direct method based on the [O III] electron temperature. The abundance ratios of O/H and S/H are derived without resort to ionization correction factors. We construct dynamic photoevapo... 6. Solar Chemical Abundances Determined with a CO5BOLD 3D Model Atmosphere Caffau, E.; Ludwig, H.-G.; Steffen, M.; Freytag, B.; Bonifacio, P. 2011-02-01 In the last decade, the photospheric solar metallicity as determined from spectroscopy experienced a remarkable downward revision. 
Part of this effect can be attributed to an improvement of atomic data and the inclusion of NLTE computations, but the use of hydrodynamical model atmospheres also seemed to play a role. This "decrease" with time of the metallicity of the solar photosphere increased the disagreement with the results from helioseismology. With a CO5BOLD 3D model of the solar atmosphere, the CIFIST team at the Paris Observatory re-determined the photospheric solar abundances of several elements, among them C, N, and O. The spectroscopic abundances are obtained by fitting the equivalent width and/or the profile of observed spectral lines with synthetic spectra computed from the 3D model atmosphere. We conclude that the effects of granular fluctuations depend on the characteristics of the individual lines, but are found to be relevant only in a few particular cases. 3D effects are not responsible for the systematic lowering of the solar abundances in recent years. The solar metallicity resulting from this analysis is Z = 0.0153, Z/X = 0.0209. 7. Bimodal chemical evolution of the Galactic disk and the Barium abundance of Cepheids Lepine, Jacques R. D.; Barros, Douglas A.; Junqueira, Thiago C.; Scarano, Sergio 2013-01-01 In order to understand the barium abundance distribution in the Galactic disk based on Cepheids, one must first be aware of important effects of the corotation resonance, situated a little beyond the solar orbit. The thin disk of the Galaxy is divided into two regions that are separated by a barrier situated at that radius. Since the gas cannot get across that barrier, the chemical evolution proceeds independently on the two sides of it. The barrier is caused by the opposite directions of the gas flows on the two sides, in addition to a Cassini-like ring void of HI (caused itself by the flows). A step in the metallicity gradient developed at corotation, due to the difference in the average star formation rate on the two sides, and to this lack of communication between them. In connection with this, a proof that the spiral arms of our Galaxy are long-lived (a few billion years) is the existence of this step. When one studies the abundance gradients by means of stars which span a range of ages, like the Cepheids, one has... 8. Episodic Model For Star Formation History and Chemical Abundances in Giant and Dwarf Galaxies Debsarma, Suma; Das, Sukanta; Pfenniger, Daniel 2016-01-01 In search of a synthetic understanding, a scenario for the evolution of the star formation rate and the chemical abundances in galaxies is proposed, combining gas infall from galactic halos, outflow of gas by supernova explosions, and an oscillatory star formation process. The oscillatory star formation model is a consequence of the modelling of the fractional mass changes of the hot, warm and cold components of the interstellar medium. The observed periods of oscillation vary in the range (0.1-3.0) × 10⁷ yr depending on various parameters varying from giant to dwarf galaxies. The evolution of metallicity varies in giant and dwarf galaxies and depends on the outflow process. Observed abundances in dwarf galaxies can be reproduced under fast outflow together with slow evaporation of cold gas into hot gas, whereas slow outflow and fast evaporation are preferred for giant galaxies. The variation of metallicities in dwarf galaxies supports the fact that the low rate of SNII production in dwarf galaxies i...
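The solar values quoted in the CO5BOLD entry above, Z = 0.0153 and Z/X = 0.0209, fix the hydrogen and helium mass fractions once the usual closure X + Y + Z = 1 is assumed. A minimal arithmetic check (a sketch, not part of the cited analysis):

```python
# Mass fractions satisfy X + Y + Z = 1, so quoting Z and Z/X fixes X and Y.
Z = 0.0153          # metal mass fraction (value quoted in the entry above)
Z_over_X = 0.0209   # metals-to-hydrogen ratio (value quoted in the entry above)

X = Z / Z_over_X    # hydrogen mass fraction
Y = 1.0 - X - Z     # helium mass fraction, by closure
print(f"X = {X:.3f}, Y = {Y:.3f}")   # roughly X ~ 0.73, Y ~ 0.25
```

9.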
On the oxygen and nitrogen chemical abundances and the evolution of the "green pea" galaxies Amorín, Ricardo O.; Vílchez, J. M. 2010-01-01 We have investigated the oxygen and nitrogen chemical abundances in extremely compact star-forming galaxies with redshifts between ~0.11 and 0.35, popularly referred to as "green peas". Direct and strong-line methods sensitive to the N/O ratio applied to their SDSS spectra reveal that these systems are genuine metal-poor galaxies, with mean oxygen abundances of about 20% solar. At a given metallicity these galaxies display systematically large N/O ratios compared to normal galaxies, which can explain the strong difference between our metallicity measurements and previous ones. While their N/O ratios follow the relation with stellar mass of local star-forming galaxies in the SDSS, we find that the mass-metallicity relation of the "green peas" is offset ≳0.3 dex to lower metallicities. We argue that a recent interaction-induced inflow of gas, possibly coupled with a selective metal-rich gas loss driven by supernova winds, may explain our findings and the known galaxy properties, namely high specific star formati... 10. A non-LTE abundance analysis of the post-AGB star ROA 5701 Thompson, H. M. A.; Dufton, P. L.; Ryans, R. S. I.; Smoker, J. V. 2006-01-01 An analysis of high-resolution Anglo-Australian Telescope (AAT)/University College London Echelle Spectrograph (UCLES) optical spectra for the ultraviolet (UV)-bright star ROA 5701 in the globular cluster omega Cen (NGC 5139) is performed, using non-local thermodynamic equilibrium (non-LTE) model atmospheres to estimate stellar atmospheric parameters and chemical composition. Abundances are derived for C, N, O, Mg, Si and S, and compared with those found previously by Moehler et al. We find a general metal underabundance relative to young B-type stars, consistent with the average metallicity of the cluster. Our results indicate that ROA 5701 has not undergone a gas-dust separation scenario as previously suggested. However, its abundance pattern does imply that ROA 5701 has evolved off the AGB prior to the onset of the third dredge-up. 11. Solving the Excitation and Chemical Abundances in Shocks: The Case of HH 1 Giannini, T.; Antoniucci, S.; Nisini, B.; Bacciotti, F.; Podio, L. 2015-11-01 We present deep spectroscopic (3600-24700 Å) X-shooter observations of the bright Herbig-Haro object HH 1, one of the best laboratories to study the chemical and physical modifications caused by protostellar shocks on the natal cloud. We observe atomic fine-structure lines, H I and He I recombination lines and H2 ro-vibrational lines (more than 500 detections in total). Line emission was analyzed by means of non-local thermal equilibrium codes to derive the electron temperature and density, and for the first time we are able to accurately probe different physical regimes behind a dissociative shock. We find a temperature stratification in the range 4000-80,000 K, and a significant correlation between temperature and ionization energy. Two density regimes are identified for the ionized gas: a more tenuous, spatially broad component (density ~10³ cm⁻³), and a more compact component (density ≥10⁵ cm⁻³) likely associated with the hottest gas. A further neutral component is also identified, having a temperature ≲10,000 K and a density >10⁴ cm⁻³. The gas fractional ionization was estimated by solving the ionization equilibrium equations of atoms detected in different ionization stages.
We find that neutral and fully ionized regions co-exist inside the shock. Also, indications in favor of at least partially dissociative shock as the main mechanism for molecular excitation are derived. Chemical abundances are estimated for the majority of the detected species. On average, abundances of non-refractory/refractory elements are lower than solar of about 0.15/0.5 dex. This indicates the presence of dust inside the medium, with a depletion factor of iron of ˜40%. Based on observations collected at the European Southern Observatory, (92.C-0058). 12. The effect of rotation on the abundances of the chemical elements of the A-type stars in the Praesepe cluster Fossati, L; Landstreet, J; Wade, G; Kochukhov, O; Monier, R; Weiss, W; Gebran, M 2008-01-01 We study how chemical abundances of late B-, A- and early F-type stars evolve with time, and we search for correlations between the abundance of chemical elements and other stellar parameters, such as effective temperature and Vsini. We have observed a large number of B-, A- and F-type stars belonging to open clusters of different ages. In this paper we concentrate on the Praesepe cluster (log t = 8.85), for which we have obtained high resolution, high signal-to-noise ratio spectra of sixteen normal A- and F-type stars and one Am star, using the SOPHIE spectrograph of the Observatoire de Haute-Provence. For all the observed stars, we have derived fundamental parameters and chemical abundances. In addition, we discuss another eight Am stars belonging to the same cluster, for which the abundance analysis had been presented in a previous paper. We find a strong correlation between peculiarity of Am stars and Vsini. The abundance of the elements underabundant in Am stars increases with Vsini, while it decreases f... 13. Detailed Chemical Abundances in NGC 5824: Another Metal-Poor Globular Cluster with Internal Heavy Element Abundance Variations Roederer, Ian U; Bailey, John I; Spencer, Meghin; Crane, Jeffrey D; Shectman, Stephen A 2015-01-01 We present radial velocities, stellar parameters, and detailed abundances of 39 elements derived from high-resolution spectroscopic observations of red giant stars in the luminous, metal-poor globular cluster NGC 5824. We observe 26 stars in NGC 5824 using the Michigan/Magellan Fiber System (M2FS) and two stars using the Magellan Inamori Kyocera Echelle (MIKE) spectrograph. We derive a mean metallicity of [Fe/H]=-1.94+/-0.02 (statistical) +/-0.10 (systematic). The metallicity dispersion of this sample of stars, 0.08 dex, is in agreement with previous work and does not exceed the expected observational errors. Previous work suggested an internal metallicity spread only when fainter samples of stars were considered, so we cannot exclude the possibility of an intrinsic metallicity dispersion in NGC 5824. The M2FS spectra reveal a large internal dispersion in [Mg/Fe], 0.28 dex, which is found in a few other luminous, metal-poor clusters. [Mg/Fe] is correlated with [O/Fe] and anti-correlated with [Na/Fe] and [Al/F... 14. Stokes$IQUV$magnetic Doppler imaging of Ap stars - III. Next generation chemical abundance mapping of Alpha 2 CVn Silvester, James; Wade, Gregg A 2014-01-01 In a previous paper we presented an updated magnetic field map for the chemically peculiar star Alpha 2 CVn using ESPaDOnS and Narval time-resolved high-resolution Stokes$IQUV$spectra. In this paper we focus on mapping various chemical element distributions on the surface of Alpha 2 CVn. 
With the new magnetic field map and new chemical abundance distributions we can investigate the interplay between the chemical abundance structures and the magnetic field topology on the surface of Alpha 2 CVn. Previous attempts at chemical abundance mapping of Alpha 2 CVn relied on lower resolution data. With our high resolution (R=65,000) dataset we present nine chemical abundance maps for the elements O, Si, Cl, Ti, Cr, Fe, Pr, Nd and Eu. We also derive an updated magnetic field map from Fe and Cr lines in Stokes$IQUV$and O and Cl in Stokes$IV$. These new maps are inferred from line profiles in Stokes$IV$using the magnetic Doppler imaging code Invers10. We examine these new chemical maps and investigate correlations... 15. Chemical Abundances for Evolved Stars in M5: Lithium through Thorium Lai, David K; Bolte, Michael; Johnson, Jennifer A; Lucatello, Sara; Kraft, Robert P; Sneden, Christopher 2011-01-01 We present analysis of high-resolution spectra of a sample of stars in the globular cluster M5 (NGC 5904). The sample includes stars from the red giant branch (seven stars), the red horizontal branch (two stars), and the asymptotic giant branch (eight stars), with effective temperatures ranging from 4000 K to 6100 K. Spectra were obtained with the HIRES spectrometer on the Keck I telescope, with a wavelength coverage from 3700 to 7950 angstroms for the HB and AGB sample, and 5300 to 7600 angstroms for the majority of the RGB sample. We find offsets of some abundance ratios between the AGB and the RGB branches. However, these discrepancies appear to be due to analysis effects, and indicate that caution must be exerted when directly comparing abundance ratios between different evolutionary branches. We find the expected signatures of pollution from material enriched in the products of the hot hydrogen burning cycles such as the CNO, Ne-Na, and Mg-Al cycles, but no significant differences within these signatures... 16. Chemical Abundances in Field Red Giants from High-Resolution H-Band Spectra using the APOGEE Spectral Linelist Smith, Verne V; Shetrone, Matthew D; Meszaros, Szabolcs; Prieto, Carlos Allende; Bizyaev, Dmitry; Perez, Ana Garcia; Majewski, Steven R; Schiavon, Ricardo; Holtzman, Jon; Johnson, Jennifer A 2012-01-01 High-resolution H-band spectra of five bright field K, M, and MS giants, obtained from the archives of the Kitt Peak National Observatory (KPNO) Fourier Transform Spectrometer (FTS), are analyzed to determine chemical abundances of 16 elements. The abundances were derived via spectrum synthesis using the detailed linelist prepared for the SDSS III Apache Point Galactic Evolution Experiment (APOGEE), which is a high-resolution near-infrared spectroscopic survey to derive detailed chemical abundance distributions and precise radial velocities for 100,000 red giants sampling all Galactic stellar populations. Measured chemical abundances include the cosmochemically important isotopes 12C, 13C, 14N, and 16O, along with Mg, Al, Si, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, and Cu. A comparison of the abundances derived here with published values for these stars reveals consistent results to ~0.1 dex. The APOGEE spectral region and linelist is, thus, well-suited for probing both Galactic chemical evolution, as well as inter... 17. Reconstructing the star formation history of the Milky Way disc(s) from chemical abundances Snaith, O.; Haywood, M.; Di Matteo, P.; Lehnert, M. D.; Combes, F.; Katz, D.; Gómez, A. 
2015-06-01 We develop a chemical evolution model to study the star formation history of the Milky Way. Our model assumes that the Milky Way has formed from a closed-box-like system in the inner regions, while the outer parts of the disc have experienced some accretion. Unlike the usual procedure, we do not fix the star formation prescription (e.g. Kennicutt law) to reproduce the chemical abundance trends. Instead, we fit the abundance trends with age to recover the star formation history of the Galaxy. Our method enables us to recover the star formation history of the Milky Way in the first Gyr with unprecedented accuracy in the inner (R < 7-8 kpc) and outer (R > 9-10 kpc) discs, the latter as sampled in the solar vicinity. We show that half the stellar mass formed during the thick-disc phase in the inner galaxy during the first 4-5 Gyr. This phase was followed by a significant dip in star formation activity (at 8-9 Gyr) and a period of roughly constant lower-level star formation for the remaining 8 Gyr. The thick-disc phase has produced as many metals in 4 Gyr as the thin-disc phase in the remaining 8 Gyr. Our results suggest that a closed-box model is able to fit all the available constraints in the inner disc. A closed-box system is qualitatively equivalent to a regime where the accretion rate maintains a high gas fraction in the inner disc at high redshift. In these conditions the SFR is mainly governed by the high turbulence of the interstellar medium. By z ~ 1 it is possible that most of the accretion takes place in the outer disc, while the star formation activity in the inner disc is mostly sustained by the gas that is not consumed during the thick-disc phase and the continuous ejecta from earlier generations of stars. The outer disc follows a star formation history very similar to that of the inner disc, although initiated at z ~ 2, about 2 Gyr before the onset of the thin-disc formation in the inner disc. 18. Chemical Abundances in 35 Metal-Poor Stars. I. Basic Data Lee, Jeong-Deok; Kim, Kang-Min 2008-01-01 We carried out a homogeneous abundance study of various elements, including α-elements, iron-peak elements and n-capture elements, for 35 metal-poor stars with a wide metallicity range (-3.0 ≲ [Fe/H] ≲ -0.5). High-resolution (R ≈ 30,000), high signal-to-noise (S/N ≥ 110) spectra with a wavelength range of 3800 to 10500 Å were obtained using the Bohyunsan Optical Echelle Spectrograph (BOES). Equivalent widths were measured by means of Gaussian fitting for numerous isolated weak lines of these elements. Atmospheric parameters were determined by a self-consistent LTE analysis technique using Fe I and Fe II lines. In this study, we present the EWs of the lines and the atmospheric parameters for the 35 metal-poor stars. 19. Ionization structure and chemical abundances of the Wolf-Rayet nebula NGC 6888 with integral field spectroscopy Fernández-Martín, A.; Martín-Gordón, D.; Vílchez, J. M.; Pérez Montero, E.; Riera, A.; Sánchez, S. F. 2012-05-01 Context. The study of nebulae around Wolf-Rayet (WR) stars gives us clues about the mass-loss history of massive stars, as well as about the chemical enrichment of the interstellar medium (ISM). Aims: This work aims to search for the observational footprints of the interactions between the ISM and stellar winds in the WR nebula NGC 6888 in order to understand its ionization structure, chemical composition, and kinematics.
Methods: We have collected a set of integral field spectroscopy observations across NGC 6888, obtained with PPAK in the optical range, performing both 2D and 1D analyses. From the 2D analysis of the northeast part of NGC 6888, we have generated maps of the extinction structure and electron density. We produced statistical frequency distributions of the radial velocity and diagnostic diagrams. Furthermore, we performed a thorough study of integrated spectra in nine regions over the whole nebula. Results: The 2D study has revealed two main behaviours. We have found that the spectra of a localized region to the southwest of this pointing can be represented well by shock models assuming n = 1000 cm⁻³, twice solar abundances, and shock velocities from 250 to 400 km s⁻¹. With the 1D analysis we derived electron densities ranging from <100 to 360 cm⁻³. The electron temperature varies from ~7700 K to ~10 200 K. A strong variation of up to a factor of 10 between different regions in the nitrogen abundance has been found: N/H appears lower than the solar abundance in those positions observed at the edges and very enhanced in the observed inner parts. Oxygen appears slightly underabundant with respect to the solar value, whereas the helium abundance is found to be above it. We propose a scenario for the evolution of NGC 6888 to explain the features observed. This scheme consists of a structure of multiple shells: i) an inner and broken shell with material from the interaction between the supergiant and WR shells, presenting an overabundance in N/H and a... 20. The magnetic field topology and chemical abundance distributions of the Ap star HD 32633 Silvester, J.; Wade, G. A. 2015-01-01 Previous observations of the Ap star HD 32633 indicated that its magnetic field was unusually complex in nature and could not be characterised by a simple dipolar structure. Here we derive magnetic field maps and chemical abundance distributions for this star using full Stokes vector (Stokes IQUV) high-resolution observations obtained with the ESPaDOnS and Narval spectropolarimeters. Our maps, produced using the Invers10 magnetic Doppler imaging (MDI) code, show that HD 32633 has a strong magnetic field which features two large regions of opposite polarity but deviates significantly from a pure dipole field. We use a spherical harmonic expansion to characterise the magnetic field and find that the harmonic energy is predominantly in the ℓ = 1 and ℓ = 2 poloidal modes with a small toroidal component. At the same time, we demonstrate that the observed Stokes parameter profiles of HD 32633 cannot be fully described by either a dipolar or dipolar plus quadrupolar field geometry. We compare the magnetic fi... 1. Chemical abundances in low surface brightness galaxies: Implications for their evolution Mcgaugh, S. S.; Bothun, G. D. 1993-01-01 Low Surface Brightness (LSB) galaxies are an important but often neglected part of the galaxy content of the universe. Their importance stems both from the selection effects which cause them to be under-represented in galaxy catalogs, and from what they can tell us about the physical processes of galaxy evolution that have resulted in something other than the traditional Hubble sequence of spirals. An important constraint for any evolutionary model is the present-day chemical abundances of LSB disks. Towards this end, spectra for a sample of 75 H II regions distributed in 20 LSB disk galaxies were obtained.
Structurally, this sample is defined as having B(0) fainter than 23.0 mag arcsec⁻² and scale lengths that cluster either around 3 kpc or 10 kpc. In fact, structurally, these galaxies are very similar to the high surface brightness spirals which define the Hubble sequence. Thus, our sample galaxies are not dwarf galaxies but instead have masses comparable to or in excess of that of the Milky Way. The basic results from these observations are summarized. 2. Effects of episodic gas infall on the chemical abundances in galaxies Köppen, J. 2005-01-01 The chemical evolution of galaxies that undergo an episode of massive and rapid accretion of metal-poor gas is investigated with models using both simplified and detailed nucleosynthesis recipes. The rapid decrease of the oxygen abundance during infall is followed by a slower evolution which leads back to the closed-box relation, thus forming a loop in the N/O-O/H diagram. For large excursions from the closed-box relation, the mass of the infalling material needs to be substantially larger than the gas remaining in the galaxy, and the accretion rate should be larger than the star formation rate. We apply this concept to the encounter of high-velocity clouds with galaxies of various masses, finding that the observed properties of these clouds are indeed able to cause substantial effects not only in low-mass galaxies, but also in the partial volumes of large massive galaxies that would be affected by the collision. Numerical models with detailed nucleosynthesis prescriptions are constructed. We assume star form... 3. Detailed Abundance Analysis of the Brightest Star in Segue 2, the Least Massive Galaxy Roederer, Ian U. 2014-01-01 We present the first high-resolution spectroscopic observations of one red giant star in the ultra-faint dwarf galaxy Segue 2, which has the lowest total mass (including dark matter) estimated for any known galaxy. These observations were made using the MIKE spectrograph on the Magellan II Telescope at Las Campanas Observatory. We perform a standard abundance analysis of this star, SDSS J021933.13+200830.2, and present abundances of 21 species of 18 elements as well as upper limits for 25 additional species. We derive [Fe/H] = -2.9, in excellent agreement with previous estimates from medium-resolution spectroscopy. Our main result is that this star bears the chemical signatures commonly found in field stars of similar metallicity. The heavy elements produced by neutron-capture reactions are present, but they are deficient at levels characteristic of stars in other ultra-faint dwarf galaxies and a few luminous dwarf galaxies. The otherwise normal abundance patterns suggest that the gas from which this star for... 4. Analysis and modeling of scale-invariance in plankton abundance Pelletier, J. D. 1996-01-01 The power spectrum, S, of horizontal transects of plankton abundance is often observed to have a power-law dependence on wavenumber, k, with exponent close to -2: S(k) ∝ k⁻² over a wide range of scales. I present power spectral analyses of aircraft lidar measurements of phytoplankton abundance from scales of 1 to 100 km. A power spectrum S(k) ∝ k⁻² is obtained. As a model for this observation, I consider a stochastic growth equation where the rate of change of plankton abundance is determined by turbulent mixing, modeled as a diffusion process in two dimensions, and exponential growth with a stochastically variable net growth rate representing a fluctuating environment.
The model predicts a lognormal distribution of abundance and a power spectrum of horizontal transects$S(k)\\propto k^{-1.8}$, close to the observed spectrum. The model equation predicts that the power spectrum of variations in abundance in time at a point in space is$S(f)\\propto f^{-1.5}$(where$f$is the frequency... 5. Ion Mobility Mass Spectrometry Direct Isotope Abundance Analysis The nuclear forensics community is currently engaged in the analysis of illicit nuclear or radioactive material for the purposes of non-proliferations and attribution. One technique commonly employed for gathering nuclear forensics information is isotope analysis. At present, the state-of-the-art methodology for obtaining isotopic distributions is thermal ionization mass spectrometry (TIMS). Although TIMS is highly accurate at determining isotope distributions, the technique requires an elementally pure sample to perform the measurement. The required radiochemical separations give rise to sample preparation times that can be in excess of one to two weeks. Clearly, the nuclear forensics community is in need of instrumentation and methods that can expedite their decision making process in the event of a radiological release or nuclear detonation. Accordingly, we are developing instrumentation that couples a high resolution IM drift cell to the front end of a MS. The IM cell provides a means of separating ions based upon their collision cross-section and mass-to-charge ratio (m/z). Two analytes with the same m/z, but with different collision cross-sections (shapes) would exit the cell at different times, essentially enabling the cell to function in a similar manner to a gas chromatography (GC) column. Thus, molecular and atomic isobaric interferences can be effectively removed from the ion beam. The mobility selected chemical species could then be introduced to a MS for high-resolution mass analysis to generate isotopic distributions of the target analytes. The outcome would be an IM/MS system capable of accurately measuring isotopic distributions while concurrently eliminating isobaric interferences and laboratory radiochemical sample preparation. The overall objective of this project is developing instrumentation and methods to produce near real-time isotope distributions with a modular mass spectrometric system that performs the required gas-phase chemistry and 6. Ion Mobility Mass Spectrometry Direct Isotope Abundance Analysis Manuel J. Manard, Stephan Weeks, Kevin Kyle 2010-05-27 The nuclear forensics community is currently engaged in the analysis of illicit nuclear or radioactive material for the purposes of non-proliferations and attribution. One technique commonly employed for gathering nuclear forensics information is isotope analysis. At present, the state-of-the-art methodology for obtaining isotopic distributions is thermal ionization mass spectrometry (TIMS). Although TIMS is highly accurate at determining isotope distributions, the technique requires an elementally pure sample to perform the measurement. The required radiochemical separations give rise to sample preparation times that can be in excess of one to two weeks. Clearly, the nuclear forensics community is in need of instrumentation and methods that can expedite their decision making process in the event of a radiological release or nuclear detonation. Accordingly, we are developing instrumentation that couples a high resolution IM drift cell to the front end of a MS. 
The IM cell provides a means of separating ions based upon their collision cross-section and mass-to-charge ratio (m/z). Two analytes with the same m/z, but with different collision cross-sections (shapes) would exit the cell at different times, essentially enabling the cell to function in a similar manner to a gas chromatography (GC) column. Thus, molecular and atomic isobaric interferences can be effectively removed from the ion beam. The mobility selected chemical species could then be introduced to a MS for high-resolution mass analysis to generate isotopic distributions of the target analytes. The outcome would be an IM/MS system capable of accurately measuring isotopic distributions while concurrently eliminating isobaric interferences and laboratory radiochemical sample preparation. The overall objective of this project is developing instrumentation and methods to produce near real-time isotope distributions with a modular mass spectrometric system that performs the required gas-phase chemistry and 7. Accurate abundance analysis of late-type stars: advances in atomic physics Barklem, Paul S 2016-01-01 The measurement of stellar properties such as chemical compositions, masses and ages, through stellar spectra, is a fundamental problem in astrophysics. Progress in the understanding, calculation and measurement of atomic properties and processes relevant to the high-accuracy analysis of F-, G-, and K-type stellar spectra is reviewed, with particular emphasis on abundance analysis. This includes fundamental atomic data such as energy levels, wavelengths, and transition probabilities, as well as processes of photoionisation, collisional broadening and inelastic collisions. A recurring theme throughout the review is the interplay between theoretical atomic physics, laboratory measurements, and astrophysical modelling, all of which contribute to our understanding of atoms and atomic processes, as well as to modelling stellar spectra. 8. Boom in boarfish abundance: insight from otolith analysis Coad, Julie Olivia; Hüssy, Karin 2012-01-01 The boarfish Capros aper is a pelagic shoaling species widely distributed along the Northeast Atlantic continental shelf. In recent years, this species has experienced a dramatic boom in abundance in the Bay of Biscay and Celtic Sea. This study aims at resolving the mechanisms responsible...... was not correlated with growth in the same year. However, year‐class strength was significantly correlated with adult growth the previous year, together with temperature during the months following spawning. The age structure shows that this species is very long lived (>30 years), but that a considerable proportion...... abundance... 9. Parent Stars of Extrasolar Planets. VIII. Chemical Abundances for 18 Elements in 31 Stars Gonzalez, Guillermo; Laws, Chris 2007-01-01 We present the results of detailed spectroscopic abundance analyses for 18 elements in 31 nearby stars with planets. The resulting abundances are combined with other similar studies of nearby stars with planets and compared to a sample of nearby stars without detected planets. We find some evidence for abundance differences between these two samples for Al, Si and Ti. Some of our results are in conflict with a recent study of stars with planets in the SPOCS database. We encourage continued st... 10. How to link the relative abundances of gas species in coma of comets to their initial chemical composition ? 
Marboeuf, Ulysse 2014-01-01 The chemical composition of comets is frequently assumed to be directly provided by the observations of the abundances of volatile molecules in the coma. The present work aims to determine the relationship between the chemical composition of the coma, the outgassing profile of volatile molecules and the internal chemical composition, and water ice structure of the nucleus, and physical assumptions on comets. To do this, we have developed a quasi 3D model of a cometary nucleus which takes into account all phase changes and water ice structures (amorphous, crystalline, clathrate, and a mixture of them); we have applied this model to the comet 67P/Churyumov-Gerasimenko, the target of the Rosetta mission. We find that the outgassing profile of volatile molecules is a strong indicator of the physical and thermal properties (water ice structure, thermal inertia, abundances, distribution, physical differentiation) of the solid nucleus. Day/night variations of the rate of production of species help to distinguish th... 11. Detailed Abundances of Planet-Hosting Wide Binaries. I. Did Planet Formation Imprint Chemical Signatures in the Atmospheres of HD 20782/81? Mack, Claude E; Stassun, Keivan G; Pepper, Joshua; Norris, John 2014-01-01 Using high-resolution echelle spectra obtained with Magellan/MIKE, we present a chemical abundance analysis of both stars in the planet-hosting wide binary system HD20782 + HD20781. Both stars are G dwarfs, and presumably coeval, forming in the same molecular cloud. Therefore we expect that they should possess the same bulk metallicities. Furthermore, both stars also host giant planets on eccentric orbits with pericenters ≲ 0.2 AU. We investigate if planets with such orbits could lead to the host stars ingesting material, which in turn may leave similar chemical imprints in their atmospheric abundances. We derived abundances of 15 elements spanning a range of condensation temperatures (T_C ≈ 40-1660 K). The two stars are found to have a mean element-to-element abundance difference of 0.04 ± 0.07 dex, which is consistent with both stars having identical bulk metallicities. In addition, for both stars, the refractory elements (T_C > 900 K) exhibit a positive correlation between a... 12. Galaxy pairs in cosmological simulations: effects of interactions on colours and chemical abundances Perez, M J; Lambas, D G; Scannapieco, C; Tissera, P B; Lambas, Diego G.; Rossi, Maria E. De; Scannapieco, Cecilia; Tissera, Patricia B. 2006-01-01 We perform a statistical analysis of galaxies in pairs in a Lambda-CDM scenario by using the chemical GADGET-2 code of Scannapieco et al. (2005) in order to study the effects of galaxy interactions on colours and metallicities. We find that galaxy-galaxy interactions can produce a bimodal colour distribution with galaxies with significant recent star formation activity contributing mainly to blue colours. In the simulations, the colours and the fractions of recently formed stars of galaxies in pairs depend on environment more strongly than those of galaxies without a close companion, suggesting that interactions play an important role in galaxy evolution. If the metallicity of the stellar populations is used as the chemical indicator, we find that the simulated galaxies determine luminosity-metallicity and stellar mass-metallicity relations which do not depend on the presence of a close companion.
However, in the case of the luminosity-metallicity relation, at a given level of enrichment, we detect a systematic d... 13. VizieR Online Data Catalog: Chemical abundances of zeta Reticuli (Adibekyan+, 2016) Adibekyan, V.; Delgado-Mena, E.; Figueira, P.; Sousa, S. G.; Santos, N. C.; Faria, J. P.; Gonzalez Hernandez, J. I.; Israelian, G.; Harutyunyan, G.; Suarez-Andres, L.; Hakobyan, A. A. 2016-05-01 The file table1.dat lists stellar parameters, S/N, and observation dates of zeta1 Ret and zeta2 Ret derived from individual and combined spectra. The file ew.dat lists the equivalent widths (EW) of all the spectral lines. The file s_lines.dat lists the lines that were used in this study. The file abund.dat lists the derived abundances of the elements for each star and spectra. (4 data files). 14. Chemical Abundances in the Secondary Star of the Neutron Star Binary Centaurus X-4 Hernández, J I G; Israelian, G; Casares, J; Maeda, K; Bonifacio, P; Molaro, P; Hernández, Jonay I. González; Rebolo, Rafael; Israelian, Garik; Casares, Jorge; Maeda, Keiichi; Bonifacio, Piercarlo; Molaro, Paolo 2005-01-01 Using a high resolution spectrum of the secondary star in the neutron star binary Cen X-4, we have derived the stellar parameters and veiling caused by the accretion disk in a consistent way. We have used a χ² minimization procedure to explore a grid of 1 500 000 LTE synthetic spectra computed for a plausible range of both stellar and veiling parameters. Adopting the best model parameters found, we have determined atmospheric abundances of Fe, Ca, Ti, Ni and Al. These element abundances are super solar ([Fe/H] = 0.23 ± 0.10), but only the abundances of Ti and Ni appear to be moderately enhanced (≥1σ) as compared with the average values of stars of similar iron content. These element abundances can be explained if the secondary star captured a significant amount of matter ejected from a spherically symmetric supernova explosion of a 4 M⊙ He core progenitor and assuming solar abundances as primordial abundances in the secondary star. The kinematic properties of the system i... 15. Chemical Abundances of Seven Irregular and Three Tidal Dwarf Galaxies in the M81 Group Croxall, Kevin V; Lee, Henry; Skillman, Evan D; Lee, Janice C; Côté, Stéphanie; Kennicutt, Robert C; Miller, Bryan W; 10.1088/0004-637X/705/1/723 2009-01-01 We have derived nebular abundances for 10 dwarf galaxies belonging to the M81 Group, including several galaxies which do not have abundances previously reported in the literature. For each galaxy, multiple H II regions were observed with GMOS-N at the Gemini Observatory in order to determine abundances of several elements (oxygen, nitrogen, sulfur, neon, and argon). For seven galaxies, at least one H II region had a detection of the temperature-sensitive [O III] λ4363 line, allowing a "direct" determination of the oxygen abundance. No abundance gradients were detected in the targeted galaxies and the observed oxygen abundances are typically in agreement with the well known metallicity-luminosity relation. However, three candidate "tidal dwarf" galaxies lie well off this relation, UGC 5336, Garland, and KDG 61. The nature of these systems suggests that UGC 5336 and Garland are indeed recently formed systems, whereas KDG 61 is most likely a dwarf spheroidal galaxy which lies along the same line of sigh... 16.
Chemical analysis by nuclear techniques This state-of-the-art report consists of four parts: production of micro-particles, analysis of boron, the alpha tracking method, and development of a neutron-induced prompt gamma ray spectroscopy (NIPS) system. The various methods for the production of micro-particles such as mechanical method, electrolysis method, chemical method, spray method were described in the first part. The second part contains sample treatment, separation and concentration, analytical method, and application of boron analysis. The third part contains characteristics of alpha track, track detectors, pretreatment of sample, neutron irradiation, etching conditions for various detectors, observation of track on the detector, etc. The last part contains basic theory, neutron source, collimator, neutron shields, calibration of NIPS, and application of the NIPS system. 17. Chemical solver to compute molecule and grain abundances and non-ideal MHD resistivities in prestellar core-collapse calculations Marchand, P.; Masson, J.; Chabrier, G.; Hennebelle, P.; Commerçon, B.; Vaytet, N. 2016-07-01 We develop a detailed chemical network relevant to calculate the conditions that are characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with a size distribution given by size-bins for these latter. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall), needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of potassium, sodium, and hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates. All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and Hall effect dominate at low densities, up to nH = 10^12 cm^-3, after which Ohmic diffusion takes over. We find that the time-scale needed to reach chemical equilibrium is always shorter than the typical dynamical (free fall) one. This allows us to build a large, multi-dimensional multi-species equilibrium abundance table over large temperature, density and ionisation rate ranges. This table, which we make accessible to the community, is used during first and second prestellar core collapse calculations to compute the non-ideal magneto-hydrodynamics resistivities, yielding a consistent dynamical-chemical description of this process. The multi-dimensional multi-species equilibrium abundance table and a copy of the code are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A18 18. Astrophysical Origins for the Unusual Chemical Abundance of the Globular Cluster Palomar 1 Niu, Ping; Zhang, Bo; Xie, Geying 2015-01-01 We study the abundances of α elements, Fe-peak elements, and neutron-capture elements in Pal 1. We found that the abundances of the SNe Ia and main s-process components of Pal 1 are larger than those of the disk stars and the abundances of the primary component of Pal 1 are smaller than those of the disk stars with similar metallicity.
The Fe abundances of Pal 1 and the disk stars mainly originate from the SNe Ia and the primary component, respectively. Although the α abundances are dominantly produced by the primary process for the disk stars and Pal 1, the contributions of the primary component to Pal 1 are smaller than the corresponding contributions to the disk stars. The Fe-peak elements V and Co mainly originate from the primary and secondary components for the disk stars and Pal 1, but the contributions of the massive stars to Pal 1 are lower than those of the massive stars to the disk stars. The Y abundances mainly originate from the weak r-component for the disk stars. However, the contribut... 19. Dust and Chemical Abundances of the Sagittarius dwarf Galaxy Planetary Nebula Hen2-436 Otsuka, Masaaki; Riebel, David; Hyung, Siek; Tajitsu, Akito; Izumiura, Hideyuki 2010-01-01 We have estimated elemental abundances of the planetary nebula (PN) Hen2-436 in the Sagittarius (Sgr) spheroidal dwarf galaxy using ESO/VLT FORS2, Magellan/MMIRS, and Spitzer/IRS spectra. We have detected candidates of [F II] 4790 Å, [Kr III] 6826 Å, and [P II] 7875 Å lines and successfully estimated the abundances of these elements ([F/H]=+1.23, [Kr/H]=+0.26, [P/H]=+0.26) for the first time. We present a relation between C, F, P, and Kr abundances among PNe and C-rich stars. The detections of F and Kr support the idea that F and Kr together with C are synthesized in the same layer and brought to the surface by the third dredge-up. We have estimated the N^2+ and O^2+ abundances using optical recombination lines (ORLs) and collisionally excited lines (CELs). The discrepancy between the abundance derived from the O ORL and that derived from the O CEL is >1 dex. To investigate the status of the central star of the PN, nebula condition, and dust properties, we construct a theoretical SED model with CLOUDY. By compar... 20. Hydrogen Atom Collision Processes in Cool Stellar Atmospheres: Effects on Spectral Line Strengths and Measured Chemical Abundances in Old Stars The precise measurement of the chemical composition of stars is a fundamental problem relevant to many areas of astrophysics. State-of-the-art approaches attempt to unite accurate descriptions of microphysics, non-local thermodynamic equilibrium (non-LTE) line formation and 3D hydrodynamical model atmospheres. In this paper I review progress in understanding inelastic collisions of hydrogen atoms with other species and their influence on spectral line formation and derived abundances in stellar atmospheres. These collisions are a major source of uncertainty in non-LTE modelling of spectral lines and abundance determinations, especially for old, metal-poor stars, which are unique tracers of the early evolution of our galaxy. Full quantum scattering calculations of direct excitation processes X(nl) + H ↔ X(n'l') + H and charge transfer processes X(nl) + H ↔ X+ + H− have been done for Li, Na and Mg [1,2,3] based on detailed quantum chemical data, e.g. [4]. Rate coefficients have been calculated and applied to non-LTE modelling of spectral lines in stellar atmospheres [5,6,7,8,9]. In all cases we find that charge transfer processes from the first excited S-state are very important, and the processes affect measured abundances for Li, Na and Mg in some stars by as much as 60%. Effects vary with stellar parameters (e.g. temperature, luminosity, metal content) and so these processes are important not only for accurate absolute abundances, but also for relative abundances among dissimilar stars.
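The hydrogen-collision entry above quotes abundance effects in percent, while most other entries in this listing quote them in dex. As a rough reading aid (an illustrative sketch, not code from any of the cited papers; the function name and the example percentages are assumptions made here), the conversion is just a base-10 logarithm:

import math

def fractional_change_to_dex(fraction):
    # A 60% change in the derived abundance multiplies the linear abundance
    # by 1.6; on the logarithmic [X/H] scale that is log10(1.6) dex.
    return math.log10(1.0 + fraction)

for f in (0.10, 0.30, 0.60):
    print(f"{f:.0%} change in abundance -> {fractional_change_to_dex(f):+.2f} dex")

# The quoted 60% corresponds to about +0.20 dex, several times larger than the
# 0.04-0.07 dex precisions cited elsewhere in this listing, which is why these
# collision rates matter for non-LTE abundance work.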
1. Magnetic Field and Atmospheric Chemical Abundances of the Magnetic Ap Star HD 318107 Bailey, J D; Bagnulo, S; Fossati, L; Kochukhov, O; Paladini, C; Silvester, J; Wade, G 2011-01-01 New spectra have been obtained with the ESPaDOnS spectropolarimeter supplemented with unpolarised spectra from the ESO UVES, UVES-FLAMES, and HARPS spectrographs of the very peculiar large-field magnetic Ap star HD 318107, a member of the open cluster NGC 6405. The available data provide sufficient material with which to re-analyse the first-order model of the magnetic field geometry and to derive abundances of Si, Ti, Fe, Nd, Pr, Mg, Cr, Mn, O, and Ca. The magnetic field structure was modelled with a low-order colinear multipole expansion, using coefficients derived from the observed variations of the field strength with rotation phase. The abundances of several elements were determined using spectral synthesis. After experiments with a very simple model of uniform abundance on each of three rings of equal width in co-latitude and symmetric about the assumed magnetic axis, we decided to model the spectra assuming uniform abundances of each element over the stellar surface. The new magnetic field measurements... 2. Lithium abundances in nearby FGK dwarf and subgiant stars: internal destruction, Galactic chemical evolution, and exoplanets Ramirez, I; Lambert, D L; Prieto, C Allende 2012-01-01 We derive atmospheric parameters and lithium abundances for 671 stars and include our measurements in a literature compilation of 1381 dwarf and subgiant stars. First, a "lithium desert" in the effective temperature (Teff) versus lithium abundance (A_Li) plane is observed such that no stars with Teff~6075 K and A_Li~1.8 are found. We speculate that most of the stars on the low A_Li side of the desert have experienced a short-lived period of severe surface lithium destruction as main-sequence or subgiant stars. Next, we search for differences in the lithium content of thin-disk and thick-disk stars, but we find that internal processes have erased from the stellar photospheres their possibly different histories of lithium enrichment. Nevertheless, we note that the maximum lithium abundance of thick-disk stars is nearly constant from [Fe/H]=-1.0 to -0.1, at a value that is similar to that measured in very metal-poor halo stars (A_Li~2.2). Finally, differences in the lithium abundance distribution of known planet... 3. The Cannon 2: A data-driven model of stellar spectra for detailed chemical abundance analyses Casey, Andrew R; Ness, Melissa; Rix, Hans-Walter; Ho, Anna Q Y; Gilmore, Gerry 2016-01-01 We have shown that data-driven models are effective for inferring physical attributes of stars (labels; Teff, logg, [M/H]) from spectra, even when the signal-to-noise ratio is low. Here we explore whether this is possible when the dimensionality of the label space is large (Teff, logg, and 15 abundances: C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Fe, Ni) and the model is non-linear in its response to abundance and parameter changes. We adopt ideas from compressed sensing to limit overall model complexity while retaining model freedom. The model is trained with a set of 12,681 red-giant stars with high signal-to-noise spectroscopic observations and stellar parameters and abundances taken from the APOGEE Survey. We find that we can successfully train and use a model with 17 stellar labels. 
Validation shows that the model does a good job of inferring all 17 labels (typical abundance precision is 0.04 dex), even when we degrade the signal-to-noise by discarding ~50% of the observing time. The model dependencie... 4. Chemical Abundances for 855 Giants in the Globular Cluster Omega Centauri (NGC 5139) Johnson, Christian I 2010-01-01 We present elemental abundances for 855 red giant branch (RGB) stars in the globular cluster Omega Centauri (ω Cen) from spectra obtained with the Blanco 4m telescope and Hydra multifiber spectrograph. The sample includes nearly all RGB stars brighter than V=13.5, and spans ω Cen's full metallicity range. The heavy alpha elements (Si, Ca, and Ti) are generally enhanced by ~+0.3 dex, and exhibit a metallicity dependent morphology that may be attributed to mass and metallicity dependent Type II supernova (SN) yields. The heavy alpha and Fe-peak abundances suggest minimal contributions from Type Ia SNe. The light elements (O, Na, and Al) exhibit >0.5 dex abundance dispersions at all metallicities, and a majority of stars with [Fe/H]>-1.6 have [O/Fe], [Na/Fe], and [Al/Fe] abundances similar to those in monometallic globular clusters, as well as O-Na, O-Al anticorrelations and the Na-Al correlation in all but the most metal-rich stars. A combination of pollution from intermediate mass asymptotic giant branch (AGB... 5. The RAVE-on catalog of stellar atmospheric parameters and chemical abundances for chemo-dynamic studies in the Gaia era Casey, Andrew R; Hogg, David W; Ness, Melissa; Rix, Hans-Walter; Kordopatis, Georges; Kunder, Andrea; Steinmetz, Matthias; Koposov, Sergey; Enke, Harry; Sanders, Jason; Gilmore, Gerry; Zwitter, Tomaž; Freeman, Kenneth C; Casagrande, Luca; Matijevič, Gal; Seabroke, George; Bienaymé, Olivier; Bland-Hawthorn, Joss; Gibson, Brad K; Grebel, Eva K; Helmi, Amina; Munari, Ulisse; Navarro, Julio F; Reid, Warren; Siebert, Arnaud; Wyse, Rosemary 2016-01-01 The orbits, atmospheric parameters, chemical abundances, and ages of individual stars in the Milky Way provide the most comprehensive illustration of galaxy formation available. The Tycho-Gaia Astrometric Solution (TGAS) will deliver astrometric parameters for the largest ever sample of Milky Way stars, though its full potential cannot be realized without the addition of complementary spectroscopy. Among existing spectroscopic surveys, the RAdial Velocity Experiment (RAVE) has the largest overlap with TGAS (≳200,000 stars). We present a data-driven re-analysis of 520,781 RAVE spectra using The Cannon. For red giants, we build our model using high-fidelity APOGEE stellar parameters and abundances for stars that overlap with RAVE. For main-sequence and sub-giant stars, our model uses stellar parameters from the K2/EPIC. We derive and validate effective temperature T_eff, surface gravity log g, and chemical abundances of up to seven elements (O, Mg, Al, Si, Ca, Fe, Ni). We report a total of 1... 6. Analysis Method for Isotope Abundance of 13C-urea In order to better control the effective content of 13C in 13C-urea reagent, the technique and the conditions for converting 13C-urea sample into the sample gas used in gas isotopic mass spectrometry detection by means of nitrite oxidation method and high temperature burning method were investigated.
The results showed that the 13CO2 gas obtained from nitrite oxidation method with 2 mg 13C-urea samples and that from high temperature burning method with 1 mg 13C-urea samples can satisfy the demand of the mass spectrometer detection. The sodium nitrite reagent dosage, the reaction temperature and the reaction time of the sample gas preparation, as well as the treatment effect of copper oxide reagent etc.were sought experimentally. The high abundance 13C-urea testing was completed, the calculation and expression of the detection data were also determined, and the standard deviation were less than ±0.07%. (authors) 7. DUST AND CHEMICAL ABUNDANCES OF THE SAGITTARIUS DWARF GALAXY PLANETARY NEBULA Hen2-436 We have estimated elemental abundances of the planetary nebula (PN) Hen2-436 in the Sagittarius (Sgr) spheroidal dwarf galaxy using ESO/VLT FORS2, Magellan/MMIRS, and Spitzer/IRS spectra. We have detected candidates of fluorine [F II] λ4790, krypton [Kr III] λ6826, and phosphorus [P II] λ7875 lines and successfully estimated the abundances of these elements ([F/H] = +1.23, [Kr/H] = +0.26, [P/H] = +0.26) for the first time. These elements are known to be synthesized by the neutron capture process in the He-rich intershell during the thermally pulsing asymptotic giant branch (AGB) phase. We present a relation between C, F, P, and Kr abundances among PNe and C-rich stars. The detections of these elements in Hen2-436 support the idea that F, P, Kr together with C are synthesized in the same layer and brought to the surface by the third dredge-up. We have detected N II and O II optical recombination lines (ORLs) and derived the N2+ and O2+ abundances. The discrepancy between the abundance derived from the oxygen ORL and that derived from the collisionally excited line is >1 dex. To investigate the status of the central star of the PN, nebula condition, and dust properties, we construct a theoretical spectral energy distribution (SED) model to match the observed SED with CLOUDY. By comparing the derived luminosity and temperature of the central star with theoretical evolutionary tracks, we conclude that the initial mass of the progenitor is likely to be ∼1.5-2.0 Msun and the age is ∼3000 yr after the AGB phase. The observed elemental abundances of Hen2-436 can be explained by a theoretical nucleosynthesis model with a star of initial mass 2.25 Msun, Z = 0.008, and LMC compositions. We have estimated the dust mass to be 2.9x10-4 Msun (amorphous carbon only) or 4.0x10-4 Msun (amorphous carbon and polycyclic aromatic hydrocarbon). Based on the assumption that most of the observed dust is formed during the last two thermal pulses and the dust-to-gas mass ratio is 5 8. The Gaia-ESO Survey: Sodium and aluminium abundances in giants and dwarfs - Implications for stellar and Galactic chemical evolution Smiljanic, R; Bragaglia, A; Donati, P; Magrini, L; Friel, E; Jacobson, H; Randich, S; Ventura, P; Lind, K; Bergemann, M; Nordlander, T; Morel, T; Pancino, E; Tautvaisiene, G; Adibekyan, V; Tosi, M; Vallenari, A; Gilmore, G; Bensby, T; Francois, P; Koposov, S; Lanzafame, A C; Recio-Blanco, A; Bayo, A; Carraro, G; Casey, A R; Costado, M T; Franciosini, E; Heiter, U; Hill, V; Hourihane, A; Jofre, P; Lardo, C; de Laverny, P; Lewis, J; Monaco, L; Morbidelli, L; Sacco, G G; Sbordone, L; Sousa, S G; Worley, C C; Zaggia, S 2016-01-01 Stellar evolution models predict that internal mixing should cause some sodium overabundance at the surface of red giants more massive than ~ 1.5--2.0 Msun. 
The surface aluminium abundance should not be affected. Nevertheless, observational results disagree about the presence and/or the degree of the Na and Al overabundances. In addition, Galactic chemical evolution models adopting different stellar yields lead to quite different predictions for the behavior of [Na/Fe] and [Al/Fe] versus [Fe/H]. Overall, the observed trends of these abundances with metallicity are not well reproduced. We readdress both issues, using new Na and Al abundances determined within the Gaia-ESO Survey, using two samples: i) more than 600 dwarfs of the solar neighborhood and of open clusters and ii) low- and intermediate-mass clump giants in six open clusters. Abundances of Na in giants with mass below ~2.0 Msun, and of Al in giants below ~3.0 Msun, seem to be unaffected by internal mixing processes. For more massive giants, the Na o... 9. Chemical Abundances of Seven Outer Halo M31 Globular Clusters from the Pan-Andromeda Archaeological Survey Sakari, Charli M 2016-01-01 Observations of stellar streams in M31's outer halo suggest that M31 is actively accreting several dwarf galaxies and their globular clusters (GCs). Detailed abundances can chemically link clusters to their birth environments, establishing whether or not a GC has been accreted from a satellite dwarf galaxy. This talk presents the detailed chemical abundances of seven M31 outer halo GCs (with projected distances from M31 greater than 30 kpc), as derived from high-resolution integrated-light spectra taken with the Hobby-Eberly Telescope. Five of these clusters were recently discovered in the Pan-Andromeda Archaeological Survey (PAndAS); this talk presents the first determinations of integrated Fe, Na, Mg, Ca, Ti, Ni, Ba, and Eu abundances for these clusters. Four of the target clusters (PA06, PA53, PA54, and PA56) are metal-poor ([Fe/H] < -1.5), α-enhanced (though they are possibly less alpha-enhanced than Milky Way stars at the 1 sigma level), and show signs of star-to-star Na and Mg variatio... 10. Detailed abundances of planet-hosting wide binaries. I. Did planet formation imprint chemical signatures in the atmospheres of HD 20782/81? Using high-resolution, high signal-to-noise echelle spectra obtained with Magellan/MIKE, we present a detailed chemical abundance analysis of both stars in the planet-hosting wide binary system HD 20782 + HD 20781. Both stars are G dwarfs, and presumably coeval, forming in the same molecular cloud. Therefore we expect that they should possess the same bulk metallicities. Furthermore, both stars also host giant planets on eccentric orbits with pericenters ≲0.2 AU. Here, we investigate if planets with such orbits could lead to the host stars ingesting material, which in turn may leave similar chemical imprints in their atmospheric abundances. We derived abundances of 15 elements spanning a range of condensation temperature, T_C ≈ 40-1660 K. The two stars are found to have a mean element-to-element abundance difference of 0.04 ± 0.07 dex, which is consistent with both stars having identical bulk metallicities. In addition, for both stars, the refractory elements (T_C > 900 K) exhibit a positive correlation between abundance (relative to solar) and T_C, with similar slopes of ≈1×10^-4 dex K^-1. The measured positive correlations are not perfect; both stars exhibit a scatter of ≈5×10^-5 dex K^-1 about the mean trend, and certain elements (Na, Al, Sc) are similarly deviant in both stars.
These findings are discussed in the context of models for giant planet migration that predict the accretion of H-depleted rocky material by the host star. We show that a simple simulation of a solar-type star accreting material with Earth-like composition predicts a positive—but imperfect—correlation between refractory elemental abundances and T C. Our measured slopes are consistent with what is predicted for the ingestion of 10-20 Earths by each star in the system. In addition, the specific element-by-element scatter might be used to distinguish between planetary accretion and Galactic chemical evolution scenarios. 11. A spectroscopic study of chemical abundances in the globular cluster Omega Centauri Blue spectra at a resolution of 0.5 A of red giants in the globular clusters Omega Centauri and NGCs 288, 362, 6397 and 6809 (M55) have been obtained with the Anglo-Australian Telescope. The observations were made to test Sweigart and Mengel's [Astrophy S. J. 229, 624] theory of mixing of nuclearly-processed material to the star's surface, and to elucidate the relationship between primordial and evolutionary origins for the range in abundance within Omega Cen. The Omega Cen stars were chosen in two groups either side of the giant branch, covering the luminosity range where the onset of mixing was predicted to occur. Abundances of C, N, Fe and other heavy elements have been determined by fitting synthetic spectra, calculated from model atmospheres, to the observational data. (author) 12. Chemical Abundances in NGC 5024 (M53): A Mostly First Generation Globular Cluster Boberg, Owen M.; Friel, Eileen D.; Vesperini, Enrico 2016-06-01 We present the Fe, Ca, Ti, Ni, Ba, Na, and O abundances for a sample of 53 red giant branch stars in the globular cluster (GC) NGC 5024 (M53). The abundances were measured from high signal-to-noise medium resolution spectra collected with the Hydra multi-object spectrograph on the Wisconsin–Indiana–Yale–NOAO 3.5 m telescope. M53 is of interest because previous studies based on the morphology of the cluster’s horizontal branch suggested that it might be composed primarily of first generation (FG) stars and differ from the majority of other GCs with multiple populations, which have been found to be dominated by the second generation (SG) stars. Our sample has an average [Fe/H] = ‑2.07 with a standard deviation of 0.07 dex. This value is consistent with previously published results. The alpha-element abundances in our sample are also consistent with the trends seen in Milky Way halo stars at similar metallicities, with enhanced [Ca/Fe] and [Ti/Fe] relative to solar. We find that the Na–O anti-correlation in M53 is not as extended as other GCs with similar masses and metallicities. The ratio of SG to the total number of stars in our sample is approximately 0.27 and the SG generation is more centrally concentrated. These findings further support that M53 might be a mostly FG cluster and could give further insight into how GCs formed the light element abundance patterns we observe in them today. 13. GALA: an automatic tool for the abundance analysis of stellar spectra Mucciarelli, A; Lovisi, L; Ferraro, F R; Lapenna, E 2013-01-01 GALA is a freely distributed Fortran code to derive automatically the atmospheric parameters (temperature, gravity, microturbulent velocity and overall metallicity) and abundances for individual species of stellar spectra using the classical method based on the equivalent widths of metallic lines. 
The abundances of individual spectral lines are derived by using the WIDTH9 code developed by R. L. Kurucz. GALA is designed to obtain the best model atmosphere, by optimizing temperature, surface gravity, microturbulent velocity and metallicity, after rejecting the discrepant lines. Finally, it computes accurate internal errors for each atmospheric parameter and abundance. The code makes it possible to obtain chemical abundances and atmospheric parameters for large stellar samples in a very short time, thus making GALA a useful tool in the epoch of the multi-object spectrographs and large surveys. An extensive set of tests with both synthetic and observed spectra is performed and discussed to explore the capabilities and ro... 14. Chemical Abundances of Planetary Nebulae in the Substructures of M31 Fang, Xuan; Guerrero, Martin A; Liu, Xiaowei; Yuan, Haibo; Zhang, Yong; Zhang, Bing 2015-01-01 We present deep spectroscopy of planetary nebulae (PNe) that are associated with the substructures of the Andromeda Galaxy (M31). The spectra were obtained with the OSIRIS spectrograph on the 10.4 m GTC. Seven targets were selected for the observations, three in the Northern Spur and four associated with the Giant Stream. The most distant target in our sample, with a rectified galactocentric distance >100 kpc, was the first PN discovered in the outer streams of M31. The [O III] 4363 auroral line was well detected in the spectra of all targets, enabling electron temperature determination. Ionic abundances are derived based on the [O III] temperatures, and elemental abundances of helium, nitrogen, oxygen, neon, sulfur, and argon are estimated. The relatively low N/O and He/H ratios as well as abundance ratios of alpha-elements indicate that our target PNe might belong to populations as old as ~2 Gyr. Our PN sample, including the current seven and the previous three observed by Fang et al., has rather homogeneo... 15. The Gaia-ESO Survey: Sodium and aluminium abundances in giants and dwarfs. Implications for stellar and Galactic chemical evolution Smiljanic, R.; Romano, D.; Bragaglia, A.; Donati, P.; Magrini, L.; Friel, E.; Jacobson, H.; Randich, S.; Ventura, P.; Lind, K.; Bergemann, M.; Nordlander, T.; Morel, T.; Pancino, E.; Tautvaišienė, G.; Adibekyan, V.; Tosi, M.; Vallenari, A.; Gilmore, G.; Bensby, T.; François, P.; Koposov, S.; Lanzafame, A. C.; Recio-Blanco, A.; Bayo, A.; Carraro, G.; Casey, A. R.; Costado, M. T.; Franciosini, E.; Heiter, U.; Hill, V.; Hourihane, A.; Jofré, P.; Lardo, C.; de Laverny, P.; Lewis, J.; Monaco, L.; Morbidelli, L.; Sacco, G. G.; Sbordone, L.; Sousa, S. G.; Worley, C. C.; Zaggia, S. 2016-05-01 Context. Stellar evolution models predict that internal mixing should cause some sodium overabundance at the surface of red giants more massive than ~1.5-2.0 M⊙. The surface aluminium abundance should not be affected. Nevertheless, observational results disagree about the presence and/or the degree of Na and Al overabundances. In addition, Galactic chemical evolution models adopting different stellar yields lead to very different predictions for the behavior of [Na/Fe] and [Al/Fe] versus [Fe/H]. Overall, the observed trends of these abundances with metallicity are not well reproduced. Aims: We readdress both issues, using new Na and Al abundances determined within the Gaia-ESO Survey.
Our aim is to obtain better observational constraints on the behavior of these elements using two samples: i) more than 600 dwarfs of the solar neighborhood and of open clusters and ii) low- and intermediate-mass clump giants in six open clusters. Methods: Abundances were determined using high-resolution UVES spectra. The individual Na abundances were corrected for nonlocal thermodynamic equilibrium effects. For the Al abundances, the order of magnitude of the corrections was estimated for a few representative cases. For giants, the abundance trends with stellar mass are compared to stellar evolution models. For dwarfs, the abundance trends with metallicity and age are compared to detailed chemical evolution models. Results: Abundances of Na in stars with mass below ~2.0 M⊙, and of Al in stars below ~3.0 M⊙, seem to be unaffected by internal mixing processes. For more massive stars, the Na overabundance increases with stellar mass. This trend agrees well with predictions of stellar evolutionary models. For Al, our only cluster with giants more massive than 3.0 M⊙, NGC 6705, is Al enriched. However, this might be related to the environment where the cluster was formed. Chemical evolution models that well fit the observed [Na/Fe] vs. [Fe/H] trend in solar neighborhood dwarfs 16. Abundance analysis of red clump stars in the old, inner disc, open cluster NGC 4337: a twin of NGC 752? Carraro, Giovanni; Villanova, Sandro 2014-01-01 Open star clusters older than ~ 1 Gyr are rare in the inner Galactic disc. Still, they are objects that hold crucial information for probing the chemical evolution of these regions of the Milky Way. We aim at increasing the number of old open clusters in the inner disc for which high-resolution metal abundances are available. Here we report on NGC 4337, which was recently discovered to be an old, inner disc open cluster. We present the very first high-resolution spectroscopy of seven clump stars that are all cluster members. We performed a detailed abundance analysis for them. We find that NGC 4337 is marginally more metal-rich than the Sun, with [Fe/H]=+0.12$\\pm$0.05. The abundance ratios of$\\alpha$-elements are generally solar. At odds with recent studies on intermediate-age and old open clusters in the Galactic disc, Ba is under-abundant in NGC 4337 compared with the Sun. Our analysis of the iron-peak elements (Cr and Ni) does not reveal anything anomalous. Based on these results, we estimate the cluster ... 17. Accurate and homogeneous abundance patterns in solar-type stars of the solar neighbourhood: a chemo-chronological analysis da Silva, R; Milone, A C; da Silva, L; Ribeiro, L S; Rocha-Pinto, H J 2012-01-01 We report the abundances of C, Na, Mg, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Y, Zr, Ba, Ce, Nd, and Sm in 25 solar-type stars in the solar neighbourhood, and their correlations with ages, kinematics, and orbital parameters. The spectroscopic analysis, based high resolution and high S/N ratio data, was differential to the Sun and applied to atomic line EWs and to C and C2 spectral synthesis. We performed a statistical study using a tree clustering analysis, searching for groups of stars sharing similar abundance patterns. We derived Teff, log(g), and [Fe/H] with errors of 30 K, 0.13 dex, and 0.05 dex, respectively. The average error in [X/Fe] is 0.06 dex. Ages were derived from theoretical HR diagrams and memberships in kinematical moving groups. 
We identified four stellar groups: with over-solar abundances (mean = +0.26 dex), under-solar abundances (mean = -0.24 dex), and intermediate values (means of -0.06 and +0.06 dex) but with distinct chemical patterns. Stars sharing solar metallicity, age, and Galactic o... 18. The C+N+O abundance of Omega Centauri giant stars: implications on the chemical enrichment scenario and the relative ages of different stellar populations Marino, A F; Piotto, G; Cassisi, S; D'Antona, F; Anderson, J; Aparicio, A; Bedin, L R; Renzini, A; Villanova, S 2011-01-01 We present a chemical-composition analysis of 77 red-giant stars in Omega Centauri. We have measured abundances for carbon and nitrogen, and combined our results with abundances of O, Na, La, and Fe that we determined in our previous work. Our aim is to better understand the peculiar chemical-enrichment history of this cluster, by studying how the total C+N+O content varies among the different-metallicity stellar groups, and among stars at different places along the Na-O anticorrelation. We find the (anti)correlations among the light elements that would be expected on theoretical grounds for matter that has been nuclearly processed via high-temperature proton captures. The overall [(C+N+O)/Fe] increases by 0.5 dex from [Fe/H] ~ -2.0 to [Fe/H] ~ -0.9. Our results provide insight into the chemical-enrichment history of the cluster, and the measured CNO variations provide important corrections for estimating the relative ages of the different stellar populations. 19. Detailed Chemical Abundances in the r-process-rich Ultra-faint Dwarf Galaxy Reticulum 2 Roederer, Ian U.; Mateo, Mario; Bailey, John I., III; Song, Yingyi; Bell, Eric F.; Crane, Jeffrey D.; Loebman, Sarah; Nidever, David L.; Olszewski, Edward W.; Shectman, Stephen A.; Thompson, Ian B.; Valluri, Monica; Walker, Matthew G. 2016-03-01 The ultra-faint dwarf (UFD) galaxy Reticulum 2 (Ret 2) was recently discovered in images obtained by the Dark Energy Survey. We have observed the four brightest red giants in Ret 2 at high spectral resolution using the Michigan/Magellan Fiber System. We present detailed abundances for as many as 20 elements per star, including 12 elements heavier than the Fe group. We confirm previous detection of high levels of r-process material in Ret 2 (mean [Eu/Fe] = +1.69 ± 0.05) found in three of these stars (mean [Fe/H] = -2.88 ± 0.10). The abundances closely match the r-process pattern found in the well-studied metal-poor halo star CS 22892-052. Such r-process-enhanced stars have not been found in any other UFD galaxy, though their existence has been predicted by at least one model. The fourth star in Ret 2 ([Fe/H] = -3.42 ± 0.20) contains only trace amounts of Sr ([Sr/Fe] = -1.73 ± 0.43) and no detectable heavier elements. One r-process enhanced star is also enhanced in C (natal [C/Fe] ≈ +1.1). This is only the third such star known, which suggests that the nucleosynthesis sites leading to C and r-process enhancements are decoupled. The r-process-deficient star is enhanced in Mg ([Mg/Fe] = +0.81 ± 0.14), and the other three stars show normal levels of α-enhancement (mean [Mg/Fe] = +0.34 ± 0.03). The abundances of other α and Fe-group elements closely resemble those in UFD galaxies and metal-poor halo stars, suggesting that the nucleosynthesis that led to the large r-process enhancements either produced no light elements or produced light-element abundance signatures indistinguishable from normal supernovae.
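As a reading aid for the square-bracket notation used throughout these entries (a hypothetical sketch, not analysis code from the Reticulum 2 paper above; the function names are assumptions made here): [X/Y] is a base-10 logarithmic ratio relative to the solar ratio, so brackets chain by simple addition and convert to linear factors with a power of ten.

def bracket_sum(x_fe, fe_h):
    # [X/H] = [X/Fe] + [Fe/H], since both are differences of base-10 logarithms.
    return x_fe + fe_h

def to_linear(dex):
    # Convert a dex offset into a linear factor relative to the solar ratio.
    return 10.0 ** dex

# Numbers quoted in the Reticulum 2 entry above, used purely as an example:
eu_fe, fe_h = +1.69, -2.88
print(bracket_sum(eu_fe, fe_h))  # [Eu/H] ~ -1.19: Eu is still scarce in absolute terms
print(to_linear(eu_fe))          # ~49: Eu/Fe is roughly 49 times the solar ratio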
This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. 20. ANALYSIS OF RICIN TOXIN PREPARATIONS FOR CARBOHYDRATE AND FATTY ACID ABUNDANCE AND ISOTOPE RATIO INFORMATION Wunschel, David S.; Kreuzer-Martin, Helen W.; Antolick, Kathryn C.; Colburn, Heather A.; Moran, James J.; Melville, Angela M. 2009-12-01 This report describes method development and preliminary evaluation for analyzing castor samples for signatures of purifying ricin. Ricin purification from the source castor seeds is essentially a problem of protein purification using common biochemical methods. Indications of protein purification will likely manifest themselves as removal of the non-protein fractions of the seed. Two major, non-protein, types of biochemical constituents in the seed are the castor oil and various carbohydrates. The oil comprises roughly half the seed weight while the carbohydrate component comprises roughly half of the remaining “mash” left after oil and hull removal. Different castor oil and carbohydrate components can serve as indicators of specific toxin processing steps. Ricinoleic acid is a relatively unique fatty acid in nature and is the most abundant component of castor oil. The loss of ricinoleic acid indicates a step to remove oil from the seeds. The relative amounts of carbohydrates and carbohydrate-like compounds, including arabinose, xylose, myo-inositol fucose, rhamnose, glucosamine and mannose detected in the sample can also indicate specific processing steps. For instance, the differential loss of arabinose relative to mannose and N-acetyl glucosamine indicates enrichment for the protein fraction of the seed using protein precipitation. The methods developed in this project center on fatty acid and carbohydrate extraction from castor samples followed by derivatization to permit analysis by gas chromatography-mass spectrometry (GC-MS). Method descriptions herein include: the source and preparation of castor materials used for method evaluation, the equipment and description of procedure required for chemical derivatization, and the instrument parameters used in the analysis. Two types of derivatization methods describe analysis of carbohydrates and one procedure for analysis of fatty acids. Two types of GC-MS analysis is included in the method development, one 1. Coulometry in quantitative chemical analysis and physico-chemical research Electroanalytical methods such as potentiometry, amperometry, coulometry and voltammetry are well established and routinely employed in quantitative chemical analysis as well as in chemical research. Coulometry is one of the most important electroanalytical techniques, which involves change in oxidation state of electro active species by heterogeneous electron transfer. In primary coulometric method, uranium is determined at mercury pool electrode and plutonium at platinum gauze electrode 2. Northern blot analysis to investigate the abundance of microorganisms Modern molecular microbial ecology has its origins in the analysis of informative macromolecules. Zuckerkandl and Pauling proposed that certain macromolecules are relatively free from evolutionary pressure and may be considered a molecular document of the evolutionary history of the organism that carries the molecule. In their paper, they proposed that the sequence difference of a molecule is proportional to the evolutionary distance between the organisms; the greater the sequence differences the greater the evolutionary distance. 
A significant breakthrough with this approach in microbial systematics resulted from the work of Woese and Fox who used oligonucleotide cataloguing of 16S-rRNA to delineate the phylogenetic relationships between microorganisms. By using this approach, it was possible to demonstrate that all life on earth could be divided into three kingdoms: eukarya, procarya and archaea. The unique findings of this research was that the archaea, made up of many methanogenic and thermophilic microorganisms, were probably the most ancient life forms on earth and were not bacteria at all. One of the first applications of rRNA genes was the recovery of unique 5S-rRNA sequences from the Yellowstone hot spring. Even though the statistical utility of the short 5S sequences was limited, it demonstrated that there was a great deal of uncultured diversity within the ecosystem. This uncultured diversity was demonstrated to be highly significant when clone libraries were constructed from the Yellowstone hot spring. Universal PCR primers were used to amplify 16S-rDNA from the microbial community, and these mixed amplicons were cloned into a vector. Each insert, potentially representing a different species, was sequenced giving a snapshot of microbial diversity in the sample. A unique feature of the rRNAs is that they are hierarchical molecules. This means that there are regions where the molecules is highly conserved, others where the sequence is variable, and even 3. Chemical Abundances and Rotation Velocities of Blue Horizontal-Branch Stars in Six Globular Clusters Behr, B B 2003-01-01 High-resolution spectroscopic measurements of blue horizontal-branch stars in six metal-poor globular clusters -- M3, M13, M15, M68, M92, and NGC 288 -- reveal remarkable variations in photospheric composition and rotation velocity as a function of a star's position along the horizontal branch. For the cooler stars (Teff < 11200 K), the derived abundances are in good agreement with the canonical cluster metallicities, and we find a wide range of v sin i rotation velocities, some as high as 40 km/s. In the hotter stars, however, most metal species are strongly enhanced, by as much as 3 dex, relative to the expected cluster metallicity, while helium is depleted by 2 dex or more. In addition, the hot stars all rotate slowly, with v sin i < 8 km/s. The anomalous abundances appear to be due to atomic diffusion mechanisms -- gravitational settling of helium, and radiative levitation of metals -- in the non-convective atmospheres of these hot stars. We discuss the influence of these photospheric metal enhancem... 4. Solar Chemical Abundances Determined with a CO5BOLD 3D Model Atmosphere Caffau, Elisabetta; Steffen, Matthias; Freytag, Bernd; Bonifacio, Piercarlo 2010-01-01 In the last decade, the photospheric solar metallicity as determined from spectroscopy experienced a remarkable downward revision. Part of this effect can be attributed to an improvement of atomic data and the inclusion of NLTE computations, but also the use of hydrodynamical model atmospheres seemed to play a role. This "decrease" with time of the metallicity of the solar photosphere increased the disagreement with the results from helioseismology. With a CO5BOLD 3D model of the solar atmosphere, the CIFIST team at the Paris Observatory re-determined the photospheric solar abundances of several elements, among them C, N, and O. 
The spectroscopic abundances are obtained by fitting the equivalent width and/or the profile of observed spectral lines with synthetic spectra computed from the 3D model atmosphere. We conclude that the effects of granular fluctuations depend on the characteristics of the individual lines, but are found to be relevant only in a few particular cases. 3D effects are not reponsible for t... 5. Detailed Chemical Abundances in the r-Process-Rich Ultra-Faint Dwarf Galaxy Reticulum 2 Roederer, Ian U; Bailey, John I; Song, Yingyi; Bell, Eric F; Crane, Jeffrey D; Loebman, Sarah; Nidever, David L; Olszewski, Edward W; Shectman, Stephen A; Thompson, Ian B; Valluri, Monica; Walker, Matthew G 2016-01-01 The ultra-faint dwarf galaxy Reticulum 2 (Ret 2) was recently discovered in images obtained by the Dark Energy Survey. We have observed the four brightest red giants in Ret 2 at high spectral resolution using the Michigan/Magellan Fiber System. We present detailed abundances for as many as 20 elements per star, including 12 elements heavier than the Fe group. We confirm previous detection of high levels of r-process material in Ret 2 (mean [Eu/Fe]=+1.69+/-0.05) found in three of these stars (mean [Fe/H]=-2.88+/-0.10). The abundances closely match the r-process pattern found in the well-studied metal-poor halo star CS22892-052. Such r-process-enhanced stars have not been found in any other ultra-faint dwarf galaxy, though their existence has been predicted by at least one model. The fourth star in Ret 2 ([Fe/H]=-3.42+/-0.20) contains only trace amounts of Sr ([Sr/Fe]=-1.73+/-0.43) and no detectable heavier elements. One r-process enhanced star is also enhanced in C (natal [C/Fe]=+1.1). This is only the third s... 6. Chemical Abundances in NGC 5053: A Very Metal-Poor and Dynamically Complex Globular Cluster Boberg, Owen M; Vesperini, Enrico 2015-01-01 NGC 5053 provides a rich environment to test our understanding of the complex evolution of globular clusters (GCs). Recent studies have found that this cluster has interesting morphological features beyond the typical spherical distribution of GCs, suggesting that external tidal effects have played an important role in its evolution and current properties. Additionally, simulations have shown that NGC 5053 could be a likely candidate to belong to the Sagittarius dwarf galaxy (Sgr dSph) stream. Using the Wisconsin-Indiana-Yale-NOAO-Hydra multi-object spectrograph, we have collected high quality (signal-to-noise ratio$\\sim$75-90), medium-resolution spectra for red giant branch stars in NGC 5053. Using these spectra we have measured the Fe, Ca, Ti, Ni, Ba, Na, and O abundances in the cluster. We measure an average cluster [Fe/H] abundance of -2.45 with a standard deviation of 0.04 dex, making NGC 5053 one of the most metal-poor GCs in the Milky Way (MW). The [Ca/Fe], [Ti/Fe], and [Ba/Fe] we measure are consist... 7. Red horizontal branch stars in the Galactic field: A chemical abundance survey Fo B.-Q. 2013-03-01 Full Text Available A large sample survey of Galactic red horizontal-branch (RHB stars was conducted to investigate their atmospheric parameters and elemental abundances. High-resolution spectra of 76 Galactic field stars were obtained with the 2.7 m Smith Telescope at McDonald Observatory. Only the color and the parallax were considered during the selection of the field stars. 
Equivalent width or synthetic spectrum analyses were used in order to determine the relative abundances of the following elements: proton-capture elements C, N, O and Li, alpha-elements Ca and Si, and neutron-capture elements Eu and La. Additionally, 12C/13C isotopic ratios were derived by using the CN features mainly located in the 7995 − 8040 Å spectral region. The evaluation of effective temperatures, surface gravities and 12C/13C isotopic ratios together with evolutionary stages of the candidates revealed that 18 out of 76 stars in our sample are probable RHBs. Including both kinematic and evolutionary status information, we conclude that we have five thick disk and 13 thin disk RHB stars in our sample. Although RHB stars have been regarded as thick disk members of the Galaxy, the low-velocity RHBs with a solar metallicity in our sample suggests the existence of a large number of thin disk RHBs, which cannot be easily explained by standard stellar evolutionary models. 8. Chemical element abundances in the outer halo globular cluster M 75 Kacharov, Nikolay 2013-01-01 We present the first comprehensive abundance study of the massive, outer halo globular cluster (GC) M 75 (NGC 6864). This unique system shows a very extended trimodal horizontal branch (HB), but no other clues for multiple populations have been detected in its colour-magnitude diagram (CMD). Based on high-resolution spectroscopic observations of 16 red giant stars, we derived the abundances of a large variety of alpha, p-capture, iron-peak, and n-capture elements. We found that the cluster is metal-rich ([Fe/H] = -1.16 +/- 0.02 dex, [alpha/Fe] = +0.30 +/- 0.02 dex), and shows a marginal spread in [Fe/H] of 0.07 dex, typical of most GCs of similar luminosity. We detected significant variations of O, Na, and Al among our sample, suggesting three different populations. Additionally, the two most Na-rich stars are also significantly Ba-enhanced, indicating a fourth population of stars. Curiously, most stars in M 75 (excluding the two Ba-rich stars) show a predominant r-process enrichment pattern, which is unusual... 9. Chemical Abundances in Broad Emission Line Regions The "Nitrogen-Loud" QSO 0353-383 Baldwin, J A; Korista, K T; Ferland, G J; Dietrich, M; Warner, C 2003-01-01 The intensity of the strong N V 1240 line relative to C IV 1549 or to He II 1640 has been proposed as an indicator of the metallicity of QSO broad emission line regions, allowing abundance measurements in a large number of QSOs out to the highest redshifts. Previously, it had been shown that the (normally) much weaker lines N III] 1750 and N IV] 1486 could be used in the same way. The redshift 1.96 QSO 0353-383 has long been known to have N III] and N IV] lines that are far stronger relative to Ly-alpha or C IV than in any other QSO. Because in this particular case these intercombination lines can be easily measured, this unusual object provides an ideal opportunity for testing whether the N V line is a valid abundance indicator. Using new observations of Q0353-383 made both with HST in the ultraviolet and from the ground in the visible passband, we find that intensity ratios involving the strengths of N V, N IV] and N III] relative to lines of He, C and O all indicate that nitrogen is overabundant relative t... 10. A landscape analysis of cougar distribution and abundance in Montana, USA. 
Riley, S J; Malecki, R A 2001-09-01 Recent growth in the distribution and abundance of cougars (Puma concolor) throughout western North America has created opportunities, challenges, and problems for wildlife managers and raises questions about what factors affect cougar populations. We present an analysis of factors thought to affect cougar distribution and abundance across the broad geographical scales on which most population management decisions are made. Our objectives were to: (1) identify and evaluate landscape parameters that can be used to predict the capability of habitats to support cougars, and (2) evaluate factors that may account for the recent expansion in cougar numbers. Habitat values based on terrain ruggedness and forested cover explained 73% of the variation in a cougar abundance index. Indices of cougar abundance also were spatially and temporally correlated with ungulate abundance. An increase in the number and total biomass of ungulate prey species is hypothesized to account for recent increases in cougars. Cougar populations in Montana are coping with land development by humans when other components of habitat and prey populations are sufficient. Our analysis provides a better understanding of what may have influenced recent growth in cougar distribution and abundance in Montana and, when combined with insights about stakeholder acceptance capacity, offers a basis for cougar management at broad scales. Long-term conservation of cougars necessitates a better understanding of ecosystem functions that affect prey distribution and abundance, more accurate estimates of cougar populations, and management abilities to integrate these components with human values. PMID:11531235 11. Origin of central abundances in the hot intra-cluster medium - II. Chemical enrichment and supernova yield models Mernier, François; Pinto, Ciro; Kaastra, Jelle S; Kosec, Peter; Zhang, Yu-Ying; Mao, Junjie; Werner, Norbert; Pols, Onno R; Vink, Jacco 2016-01-01 The hot intra-cluster medium (ICM) is rich in metals, which are synthesised by supernovae (SNe) and accumulate over time into the deep gravitational potential well of clusters of galaxies. Since most of the elements visible in X-rays are formed by type Ia (SNIa) and/or core-collapse (SNcc) supernovae, measuring their abundances gives us direct information on the nucleosynthesis products of billions of SNe since the epoch of the star formation peak (z~2-3). In this study, we compare the most accurate average X/Fe abundance ratios (compiled in a previous work from XMM-Newton EPIC and RGS observations of 44 galaxy clusters, groups, and ellipticals), representative of the chemical enrichment in the nearby ICM, to various SNIa and SNcc nucleosynthesis models found in the literature. The use of a SNcc model combined to any favoured standard SNIa model (deflagration or delayed-detonation) fails to reproduce our abundance pattern. In particular, the Ca/Fe and Ni/Fe ratios are significantly underestimated by the model... 12. Chemical Abundances in Twelve Red Giants of the Large Magellanic Cloud from High-Resolution Infrared Spectroscopy Smith, V V; Cunha, K; Plez, B; Lambert, D L; Pilachowski, C A; Barbuy, B; Melendez, J; Balachandran, S C; Bessell, M S; Geisler, D; Hesser, J E; Winge, C 2002-01-01 High-resolution infrared spectra (R=50,000) have been obtained for twelve red-giant members of the LMC with the Gemini South 8.3-meter telescope plus Phoenix spectrometer. 
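As a hedged illustration of the comparison described in the intra-cluster-medium record above (no. 11): matching an observed X/Fe abundance pattern to supernova yield models is, at heart, a small least-squares problem for the mix of SNIa and SNcc events. The sketch below uses purely illustrative yield and abundance numbers, not values from that paper.

```python
import numpy as np

# Toy abundance-pattern fit: the observed pattern is modelled as
# N_Ia * y_Ia + N_cc * y_cc and solved with ordinary least squares
# (real analyses often use weighted or non-negative fits).
# All numbers are illustrative placeholders, NOT values from the paper above.
elements = ["O", "Si", "S", "Ca", "Ni"]
y_Ia = np.array([0.10, 0.15, 0.09, 0.012, 0.070])  # hypothetical SNIa yields per event
y_cc = np.array([1.50, 0.10, 0.04, 0.006, 0.006])  # hypothetical SNcc yields per event
obs  = np.array([3.00, 0.45, 0.22, 0.033, 0.110])  # hypothetical observed pattern

A = np.column_stack([y_Ia, y_cc])
(n_Ia, n_cc), *_ = np.linalg.lstsq(A, obs, rcond=None)

model = A @ np.array([n_Ia, n_cc])
for el, o, m in zip(elements, obs, model):
    print(f"{el:>2}: observed {o:.3f}  model {m:.3f}")
print(f"best-fit SNIa events: {n_Ia:.2f}, SNcc events: {n_cc:.2f}")
print(f"SNIa fraction of all events: {n_Ia / (n_Ia + n_cc):.1%}")
```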
Quantitative chemical abundances of carbon-12, carbon-13, nitrogen-14, and oxygen-16 were derived from molecular lines of CO, CN, and OH, while sodium, scandium, titanium, and iron abundances were derived from neutral atomic lines. The LMC giants have masses from about 1 to 4 solar masses and span a metallicity range from [Fe/H]= -1.1 to -0.3. The program red giants all show evidence of first dredge-up mixing, with low 12C/13C ratios, and low 12C correlated with high 14N abundances. Comparisons of the oxygen-to-iron ratios in the LMC and the Galaxy indicate that the trend of [O/Fe] versus [Fe/H] in the LMC falls about 0.2 dex below the Galactic trend. Such an offset can be modeled as due to an overall lower rate of supernovae per unit mass in the LMC relative to the Galaxy, as well as a slightly lower ratio of supernovae of type II to super... 13. Galactic Chemical Evolution and solar s-process abundances: dependence on the 13C-pocket structure Bisterzo, S; Gallino, R; Wiescher, M; Käppeler, F 2014-01-01 We study the s-process abundances (A > 90) at the epoch of the solar-system formation. AGB yields are computed with an updated neutron capture network and updated initial solar abundances. We confirm our previous results obtained with a Galactic Chemical Evolution (GCE) model: (i) as suggested by the s-process spread observed in disk stars and in presolar meteoritic SiC grains, a weighted average of s-process strengths is needed to reproduce the solar s-distribution of isotopes with A > 130; (ii) an additional contribution (of about 25%) is required in order to represent the solar s-process abundances of isotopes from A = 90 to 130. Furthermore, we investigate the effect of different internal structures of the 13C-pocket, which may affect the efficiency of the 13C(α, n)16O reaction, the major neutron source of the s-process. First, keeping the same 13C profile adopted so far, we modify by a factor of two the mass involved in the pocket; second, we assume a flat 13C profile in the pocket, and we test again the... 14. Element Abundances in a Gas-rich Galaxy at z = 5: Clues to the Early Chemical Enrichment of Galaxies Morrison, Sean; Som, Debopam; DeMarcy, Bryan; Quiret, Samuel; Peroux, Celine 2016-01-01 Element abundances in high-redshift quasar absorbers offer excellent probes of the chemical enrichment of distant galaxies, and can constrain models for population III and early population II stars. Recent observations indicate that the sub-damped Lyman-alpha (sub-DLA) absorbers are more metal-rich than the damped Lyman-alpha (DLA) absorbers at redshifts 0 < z < 4.7. However, only 3 DLAs at z > 4.5 and no sub-DLAs at z > 3.5 have "dust-free" metallicity measurements of undepleted elements. We report the first measurement of element abundances in a sub-DLA at z=5.0, using Keck HIRES and ESI data. We obtain fairly robust abundances of C, O, Si, and Fe, using lines outside the Lyman-alpha forest. We find this absorber to be metal-poor, with [O/H] = -2.02 ± 0.12, which is >5σ below the level expected from an extrapolation of the trend for z < 3.5 sub-DLAs. The C/O ratio is 1.7^{+0.4}_{-0.3} times lower than in the Sun. More strikingly, Si/O is 3.0^{+0.6}_{-0.5} times lower than in the Sun, wh... 15. Spectroscopic Survey of {\\gamma} Doradus Stars I.
Comprehensive atmospheric parameters and abundance analysis of {\\gamma} Doradus stars Kahraman-Alicavus, F; De Cat, P; Soydugan, E; Kolaczkowski, Z; Ostrowski, J; Telting, J H; Uytterhoeven, K; Poretti, E; Rainer, M; Suarez, J C; Mantegazza, L; Kilmartin, P; Pollard, K R 2016-01-01 We present a spectroscopic survey of {\\gamma} Doradus stars. The high-resolution, high signal-to-noise spectra of fifty-two objects were collected by five different spectrographs. The spectral classification, atmospheric parameters ($T_{\\rm eff}$, $\\log g$, {\\xi}), vsini and chemical composition of the stars were derived. The spectral and luminosity classes of the stars were found between A7 - G0 and IV - V, respectively. The initial values for $T_{\\rm eff}$ and $\\log g$ were determined from photometric indices and spectral energy distribution. Those parameters were improved by the analysis of hydrogen lines. The final values of $T_{\\rm eff}$, $\\log g$ and {\\xi} were derived from the iron lines analysis. For the whole sample, $T_{\\rm eff}$ values were found between 6000 K and 7900 K, while $\\log g$ values range from 3.8 to 4.5 dex. Chemical abundances and v sin i values were derived by the spectrum synthesis method. The $v \\sin i$ values were found between 5 and 240 km s$^{-1}$. The chemical abundance pattern of... 16. Chemical Abundances in the PN Wray16-423 in the Sagittarius Dwarf Spheroidal Galaxy: Constraining the Dust Composition Otsuka, Masaaki 2015-01-01 We performed a detailed analysis of elemental abundances, dust features, and polycyclic aromatic hydrocarbons (PAHs) in the C-rich planetary nebula (PN) Wray16-423 in the Sagittarius dwarf spheroidal galaxy, based on a unique dataset taken from the Subaru/HDS, MPG/ESO FEROS, HST/WFPC2, and Spitzer/IRS. We performed the first measurements of Kr, Fe, and recombination O abundance in this PN. The extremely small [Fe/H] implies that most Fe atoms are in the solid phase, taking the abundance of [Ar/H] into account. The Spitzer/IRS spectrum displays broad 16-24 um and 30 um features, as well as PAH bands at 6-9 um and 10-14 um. The unidentified broad 16-24 um feature may not be related to iron sulfide (FeS), amorphous silicate, or PAHs. Using the spectral energy distribution model, we derived the luminosity and effective temperature of the central star, and the gas and dust masses. The observed elemental abundances and derived gas mass are in good agreement with asymptotic giant branch nucleosynthesis models f... 17. A chemical solver to compute molecule and grain abundances and non-ideal MHD resistivities in prestellar core collapse calculations Marchand, Pierre; Chabrier, Gilles; Hennebelle, Patrick; Commerçon, Benoit; Vaytet, Neil 2016-01-01 We develop a detailed chemical network relevant to the conditions characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with a size distribution given by size-bins for the latter. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall), needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of Potassium, Sodium and Hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates.
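The chemical-solver record above (no. 17) integrates a stiff system of time-dependent rate equations until the abundances settle to equilibrium. A minimal sketch of that idea, reduced to a single cosmic-ray ionisation/recombination balance with round illustrative rate coefficients (not the paper's network), is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ionisation balance: d n_e/dt = zeta * n_H - alpha * n_e**2 (with n_i = n_e).
# zeta, alpha and n_H are round illustrative values, not the published network.
zeta  = 1e-17   # cosmic-ray ionisations per H per second
alpha = 1e-7    # recombination rate coefficient (cm^3 s^-1)
n_H   = 1e6     # total hydrogen number density (cm^-3)

def rhs(t, y):
    n_e = y[0]
    return [zeta * n_H - alpha * n_e**2]

# Integrate with an implicit (stiff) method until the solution is effectively steady.
sol = solve_ivp(rhs, (0.0, 1e12), [0.0], method="BDF", rtol=1e-8, atol=1e-20)
n_e_eq = sol.y[0, -1]
print(f"equilibrium electron density: {n_e_eq:.3e} cm^-3")
print(f"analytic steady state:        {np.sqrt(zeta * n_H / alpha):.3e} cm^-3")
```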
All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and Hall effect dominate at low densities, up to n_H = 10^12 cm^-3, after which Oh... 18. The Advanced Spectral Library (ASTRAL): Abundance Analysis of the CP Star HR 465 Carpenter, Kenneth G.; Nielsen, Krister E.; Kober, Gladys V. 2016-01-01 We present the results of a spectrum analysis of the prototypical A-type magnetic CP star HR465. Synthetic spectra, using a non-LTE atmosphere model, were generated to fit high-resolution ultraviolet spectra (1200-3100 A) obtained as a part of the "Advanced Spectral Library (ASTRAL) Project: Hot Stars" program (GO-13346: Ayres PI). The ultraviolet data were supplemented by high resolution optical data recorded at the Nordic Optical Telescope with the SOFIN spectrograph. The optical data were used as a complement to the high line density ultraviolet spectrum and primarily used to derive accurate iron-group element abundances. HR 465 has previously been analyzed using IUE spectra. We revisit the object with this high quality data. Large parts of the spectrum have been synthesized with an ATLAS model (Teff=10750K, logg=4.0) and we present abundance results for more than 50 elements. We can confirm some of the abundance characteristics previously derived from IUE data, where elements heavier than Z=30 show significant abundance enhancements compared to solar values, while some of the lighter elements show abundance deficiencies. We will place these results in the context of other Ap stars, and the large number of element abundances will also help us to put some constraints on stellar abundance and evolution theories. 19. Computational and statistical analyses of amino acid usage and physico-chemical properties of the twelve late embryogenesis abundant protein classes. Emmanuel Jaspard Late Embryogenesis Abundant Proteins (LEAPs) are ubiquitous proteins expected to play major roles in desiccation tolerance. Little is known about their structure - function relationships because of the scarcity of 3-D structures for LEAPs. The previous building of LEAPdb, a database dedicated to LEAPs from plants and other organisms, led to the classification of 710 LEAPs into 12 non-overlapping classes with distinct properties. Using this resource, numerous physico-chemical properties of LEAPs and amino acid usage by LEAPs have been computed and statistically analyzed, revealing distinctive features for each class. This unprecedented analysis allowed a rigorous characterization of the 12 LEAP classes, which differed also in multiple structural and physico-chemical features. Although most LEAPs can be predicted as intrinsically disordered proteins, the analysis indicates that LEAP class 7 (PF03168) and probably LEAP class 11 (PF04927) are natively folded proteins. This study thus provides a detailed description of the structural properties of this protein family opening the path toward further LEAP structure - function analysis. Finally, since each LEAP class can be clearly characterized by a unique set of physico-chemical properties, this will allow development of software to predict proteins as LEAPs. 20. Derivation of chemical abundances in star-forming galaxies at intermediate redshift Perez-Martinez, J M 2014-01-01 We have studied a sample of 11 blue, luminous, metal-poor galaxies at redshift 0.744 < z < 0.835 from the DEEP2 redshift survey.
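The "direct" oxygen abundances in studies of this kind hinge on the electron temperature implied by the [OIII] 4363/(4959+5007) line ratio, which is why the auroral line drives the sample selection described below. A sketch using the textbook Osterbrock & Ferland approximation for that ratio, with a made-up input value, is:

```python
import numpy as np
from scipy.optimize import brentq

def oiii_ratio(T, n_e=100.0):
    """Textbook approximation (Osterbrock & Ferland) for
    [I(4959) + I(5007)] / I(4363) as a function of electron temperature T in K."""
    return 7.90 * np.exp(3.29e4 / T) / (1.0 + 4.5e-4 * n_e / np.sqrt(T))

def te_from_ratio(r_obs, n_e=100.0):
    # The ratio decreases monotonically with T, so a simple root bracket suffices.
    return brentq(lambda T: oiii_ratio(T, n_e) - r_obs, 5e3, 3e4)

r_obs = 120.0  # made-up example line ratio, not a value from the paper above
print(f"T_e([OIII]) = {te_from_ratio(r_obs):.0f} K")
```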
They were selected by the presence of the [OIII]4363 auroral line and the [OII]3726,3729 doublet together with the strong nebular [OIII] emission lines in their spectra, from a sample of around 6000 galaxies within a narrow redshift range. All the spectra have been taken with DEIMOS, a multi-slit, double-beam spectrograph that uses slitmasks to allow the spectra from many objects to be imaged at the same time. The selected objects present high luminosities (-20.3 < MB < -18.5), a remarkably blue color index, and total oxygen abundances between 7.69 and 8.15, which represent 1/3 to 1/10 of the solar value. The wide spectral coverage (from 6500 to 9100 angstroms) of the DEIMOS spectrograph and its high spectral resolution, R around 5000, give us an opportunity to study the behaviour of these star-forming galaxies at intermediate redshift with high quality spectra. We ... 1. Spectroscopic survey of γ Doradus stars - I. Comprehensive atmospheric parameters and abundance analysis of γ Doradus stars Kahraman Aliçavuş, F.; Niemczura, E.; De Cat, P.; Soydugan, E.; Kołaczkowski, Z.; Ostrowski, J.; Telting, J. H.; Uytterhoeven, K.; Poretti, E.; Rainer, M.; Suárez, J. C.; Mantegazza, L.; Kilmartin, P.; Pollard, K. R. 2016-05-01 We present a spectroscopic survey of known and candidate γ Doradus stars. The high-resolution, high signal-to-noise spectra of 52 objects were collected by five different spectrographs. The spectral classification, atmospheric parameters (Teff, log g, ξ), vsin i and chemical composition of the stars were derived. The stellar spectral and luminosity classes were found between G0-A7 and IV-V, respectively. The initial values for Teff and log g were determined from the photometric indices and spectral energy distribution. Those parameters were improved by the analysis of hydrogen lines. The final values of Teff, log g and ξ were derived from the iron lines analysis. The Teff values were found between 6000 K and 7900 K, while log g values range from 3.8 to 4.5 dex. Chemical abundances and vsin i values were derived by the spectrum synthesis method. The vsin i values were found between 5 and 240 km s-1. The chemical abundance pattern of γ Doradus stars was compared with the pattern of non-pulsating stars. It turned out that there is no significant difference in abundance patterns between these two groups. Additionally, the relations between the atmospheric parameters and the pulsation quantities were checked. A strong correlation between the vsin i and the pulsation periods of γ Doradus variables was obtained. The accurate positions of the analysed stars in the Hertzsprung-Russell diagram have been shown. Most of our objects are located inside or close to the blue edge of the theoretical instability strip of γ Doradus. 2. Infrared Spectra and Chemical Abundance of Methyl Propionate in Icy Astrochemical Conditions Sivaraman, B; Das, A; Gopakumar, G; Majumdar, L; Chakrabarti, S K; Subramanian, K P; Sekhar, B N Raja; Hada, M 2014-01-01 We carried out an experiment in order to obtain the InfraRed (IR) spectra of methyl propionate (CH3CH2COOCH3) in astrochemical conditions and present the IR spectra for future identification of this molecule in the InterStellar Medium (ISM). The experimental IR spectrum is compared with the theoretical spectrum and an attempt was made to assign the observed peak positions to their corresponding molecular vibrations in condensed phase.
Moreover, our calculations suggest that methyl propionate must be synthesized efficiently within the complex chemical network of the ISM and therefore be present in cold dust grains, awaiting identification. 3. The metallicity gradient of M 33: chemical abundances of HII regions Magrini, L.; Vilchez, J. M.; A. Mampaso; Corradi, R. L. M.; Leisy, P. 2007-01-01 We present spectroscopic observations of a sample of 72 emission-line objects, including mainly HII regions, in the spiral galaxy M 33. Spectra were obtained with the multi-object, wide field spectrograph AF2/WYFFOS at the 4.2m WHT telescope. Line intensities, extinction, and electron density were determined for the whole sample of objects. The aim of the present work was to derive chemical and physical parameters of a set of HII regions, and from them the metallicity gradient. Electron tempe... 4. Chemical analysis of ancient relicts in the Milky Way disk Tautvaišienė G. 2012-02-01 We present a detailed analysis of two groups of F- and G-type stars originally found to have similarities in their orbital parameters. The distinct kinematic properties suggest that they might originate from ancient accretion events in the Milky Way. From high resolution spectra taken with the spectrograph FIES at the Nordic Optical Telescope, La Palma, we determined abundances of oxygen, alpha- and r-process elements. Our results indicate that the sample of investigated stars is chemically homogeneous and that oxygen, alpha- and r-process elements are overabundant in comparison with Galactic disk dwarfs. This provides additional evidence that those stellar groups had a common formation and possibly originate from disrupted satellites. 5. Spectroscopic abundance analysis of dwarfs in young open cluster IC 4665 Shen, Z X; Lin, D N C; Liu, X W; Li, S L 2005-01-01 We report a detailed spectroscopic abundance analysis for a sample of 18 F-K dwarfs of the young open cluster IC 4665. Stellar parameters and element abundances of Li, O, Mg, Si, Ca, Ti, Cr, Fe and Ni have been derived using the spectroscopic synthesis tool SME (Spectroscopy Made Easy). Within the measurement uncertainties the iron abundance is uniform with a standard deviation of 0.04 dex. No correlation is found between the iron abundance and the mass of the stellar convective zone, or between the Li abundance and the Fe abundance. In other words, our results do not reveal any signature of accretion and therefore do not support the scenario that stars with planets (SWPs) acquire their, on average, higher metallicity compared to field stars via accretion of metal-rich planetary material. Instead, the higher metallicity of SWPs may simply reflect the fact that planet formation is more efficient in high metallicity environs. However, since many details of the planet system formation processes remain poo... 6. CHEMICAL ABUNDANCES IN THE EXTERNALLY POLLUTED WHITE DWARF GD 40: EVIDENCE OF A ROCKY EXTRASOLAR MINOR PLANET We present Keck/High Resolution Echelle Spectrometer data with model atmosphere analysis of the helium-dominated polluted white dwarf GD 40, in which we measure atmospheric abundances relative to helium of nine elements: H, O, Mg, Si, Ca, Ti, Cr, Mn, and Fe. Apart from hydrogen, whose association with the other contaminants is uncertain, this material most likely accreted from GD 40's circumstellar dust disk whose existence is demonstrated by excess infrared emission.
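The GD 40 analysis that continues below quotes the parent body's make-up both as number ratios, such as n(Si)/n(Mg), and as mass fractions; converting measured number abundances into mass fractions is simple bookkeeping, sketched here with made-up example values rather than the published measurements.

```python
# Convert relative number abundances n_i into mass fractions X_i = n_i*A_i / sum_j n_j*A_j.
# The n_rel values are made-up example numbers, NOT the GD 40 measurements.
atomic_weight = {"O": 16.00, "Mg": 24.31, "Si": 28.09, "Fe": 55.85, "Ca": 40.08}
n_rel = {"O": 100.0, "Mg": 15.0, "Si": 5.0, "Fe": 12.0, "Ca": 1.0}

total_mass = sum(n_rel[el] * atomic_weight[el] for el in n_rel)
mass_fraction = {el: n_rel[el] * atomic_weight[el] / total_mass for el in n_rel}

for el, x in sorted(mass_fraction.items(), key=lambda kv: -kv[1]):
    print(f"{el}: {100 * x:.1f}% by mass")
print(f"n(Si)/n(Mg) = {n_rel['Si'] / n_rel['Mg']:.2f}")
```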
The data are best explained by accretion of rocky planetary material, in which heavy elements are largely contained within oxides, derived from a tidally disrupted minor planet at least the mass of Juno, and probably as massive as Vesta. The relatively low hydrogen abundance sets an upper limit of 10% water by mass in the inferred parent body, and the relatively high abundances of refractory elements, Ca and Ti, may indicate high-temperature processing. While the overall constitution of the parent body is similar to that of the bulk Earth, being over 85% by mass composed of oxygen, magnesium, silicon, and iron, we find n(Si)/n(Mg) = 0.30 ± 0.11, significantly smaller than the ratio near unity for the bulk Earth, chondrites, the Sun, and nearby stars. This result suggests that differentiation occurred within the parent body. 7. Chemical Abundances in the Externally Polluted White Dwarf GD 40: Evidence of a Rocky Extrasolar Minor Planet Klein, B; Koester, D; Zuckerman, B; Melis, C 2009-01-01 We present Keck/HIRES data with model atmosphere analysis of the helium-dominated polluted white dwarf GD 40, in which we measure atmospheric abundances relative to helium of 9 elements: H, O, Mg, Si, Ca, Ti, Cr, Mn, and Fe. Apart from hydrogen, whose association with the other contaminants is uncertain, this material most likely accreted from GD 40's circumstellar dust disk whose existence is demonstrated by excess infrared emission. The data are best explained by accretion of rocky planetary material, in which heavy elements are largely contained within oxides, derived from a tidally disrupted minor planet at least the mass of Juno, and probably as massive as Vesta. The relatively low hydrogen abundance sets an upper limit of 10% water by mass in the inferred parent body, and the relatively high abundances of refractory elements, Ca and Ti, may indicate high-temperature processing. While the overall constitution of the parent body is similar to that of the bulk Earth, being over 85% by mass composed of oxygen, magnes... 8. Solving the excitation and chemical abundances in shocks: the case of HH1 Giannini, T; Nisini, B; Bacciotti, F; Podio, L 2015-01-01 We present deep spectroscopic (3600 - 24700 A) X-shooter observations of the bright Herbig-Haro object HH1, one of the best laboratories to study the chemical and physical modifications caused by protostellar shocks on the natal cloud. We observe atomic fine-structure lines, HI and He recombination lines, and H_2 ro-vibrational lines (more than 500 detections in total). Line emission was analyzed by means of Non Local Thermal Equilibrium codes to derive the electron temperature and density, and, for the first time, we are able to accurately probe different physical regimes behind a dissociative shock. We find a temperature stratification in the range 4000 - 80000 K, and a significant correlation between temperature and ionization energy. Two density regimes are identified for the ionized gas, a more tenuous, spatially broad component (density about 10^3 cm^-3), and a more compact component (density > 10^5 cm^-3) likely associated with the hottest gas. A further neutral component is also evidenced, having te... 9. Oxygen Gas Abundances at z~1.4: Implications for the Chemical Evolution History of Galaxies Maier, C; Carollo, C M; Meisenheimer, K; Hippelein, H; Stockton, A 2006-01-01 The chemical evolution models are used to relate t... 10.
Nitrogen and Oxygen Abundance Variations in the Outer Ejecta of Eta Carinae: Evidence for Recent Chemical Enrichment Smith, N; Smith, Nathan; Morse, Jon A. 2004-01-01 We present optical spectra of the ionized Outer Ejecta' of Eta Carinae that reveal differences in chemical composition at various positions. In particular, young condensations just outside the dusty Homunculus Nebula show strong nitrogen lines and little or no oxygen -- but farther away, nitrogen lines weaken and oxygen lines become stronger. The observed variations in the apparent N/O ratio may signify either that the various blobs were ejected with different abundances, or more likely, that the more distant condensations are interacting with normal-composition material. The second hypothesis is supported by various other clues involving kinematics and X-ray emission, and would suggest that Eta Car is enveloped in a cocoon'' deposited by previous stellar-wind mass loss. In particular, all emission features where we detect strong oxygen lines are coincident with or outside the soft X-ray shell. In either case, the observed abundance variations suggest that Eta Car's ejection of nitrogen-rich material is a ... 11. The Chemical Composition Contrast between M3 and M13 Revisited: New Abundances for 28 Giant Stars in M3 Sneden, Christopher; Kraft, Robert P.; Guhathakurta, Puragra; Peterson, Ruth C.; Fulbright, Jon P. 2004-04-01 We report new chemical abundances of 23 bright red giant members of the globular cluster M3, based on high-resolution (R~45,000) spectra obtained with the Keck I telescope. The observations, which involve the use of multislits in the HIRES Keck I spectrograph, are described in detail. Combining these data with a previously reported small sample of M3 giants obtained with the Lick 3 m telescope, we compare metallicities and [X/Fe] ratios for 28 M3 giants with a 35-star sample in the similar-metallicity cluster M13, and with Galactic halo field stars having [Fe/H]=A(Si), we derive little difference in [X/Fe] ratios in the M3, M13, or halo field samples. All three groups exhibit C depletion with advancing evolutionary state beginning at the level of the red giant branch bump,'' but the overall depletion of about 0.7-0.9 dex seen in the clusters is larger than that associated with the field stars. The behaviors of O, Na, Mg, and Al are distinctively different among the three stellar samples. Field halo giants and subdwarfs have a positive correlation of Na with Mg, as predicted from explosive or hydrostatic carbon burning in Type II supernova sites. Both M3 and M13 show evidence of high-temperature proton-capture synthesis from the ON, NeNa, and MgAl cycles, while there is no evidence for such synthesis among halo field stars. But the degree of such extreme proton-capture synthesis in M3 is smaller than it is in M13: the M3 giants exhibit only modest deficiencies of O and corresponding enhancements of Na, less extreme overabundances of Al, fewer stars with low Mg and correspondingly high Na, and no indication that O depletions are a function of advancing evolutionary state, as has been claimed for M13. We have also considered NGC 6752, for which Mg isotopic abundances have been reported by Yong et al. Giants in NGC 6752 and M13 satisfy the same anticorrelation of O abundances with the ratio (25Mg+26Mg)/24Mg, which measures the relative contribution of rare to 12. 
Chemical abundances in the multiple sub-giant branch of 47 Tucanae: insights on its faint sub-giant branch component Marino, A F; Casagrande, L; Collet, R; Dotter, A; Johnson, C I; Lind, K; Bedin, L R; Jerjen, H; Aparicio, A; Sbordone, L 2016-01-01 The globular cluster 47 Tuc exhibits a complex sub-giant branch (SGB) with a faint-SGB comprising only about the 10% of the cluster mass and a bright-SGB hosting at least two distinct populations.We present a spectroscopic analysis of 62 SGB stars including 21 faint-SGB stars. We thus provide the first chemical analysis of the intriguing faint-SGB population and compare its abundances with those of the dominant populations. We have inferred abundances of Fe, representative light elements C, N, Na, and Al, {\\alpha} elements Mg and Si for individual stars. Oxygen has been obtained by co-adding spectra of stars on different sequences. In addition, we have analysed 12 stars along the two main RGBs of 47 Tuc. Our principal results are: (i) star-to-star variations in C/N/Na among RGB and bright-SGB stars; (ii) substantial N and Na enhancements for the minor population corresponding to the faint-SGB; (iii) no high enrichment in C+N+O for faint-SGB stars. Specifically, the C+N+O of the faint-SGB is a factor of 1.1 hi... 13. Abundance analysis of a sample of evolved stars in the outskirts of Omega Centauri Villanova, Sandro; Scarpa, Riccardo; Marconi, Gianni 2009-01-01 The globular cluster $\\omega$ Centauri (NGC 5139) is a puzzling stellar system harboring several distinct stellar populations whose origin still represents a unique astrophysical challenge. Current scenarios range from primordial chemical inhomogeneities in the mother cloud to merging of different sub-units and/or subsequent generations of enriched stars - with a variety of different pollution sources- within the same potential well. In this paper we study the chemical abundance pattern in the outskirts of Omega Centauri, half-way to the tidal radius (covering the range of 20-30 arcmin from the cluster center), and compare it with chemical trends in the inner cluster regions, in an attempt to explore whether the same population mix and chemical compositions trends routinely found in the more central regions is also present in the cluster periphery.We extract abundances of many elements from FLAMES/UVES spectra of 48 RGB stars using the equivalent width method and then analyze the metallicity distribution func... 14. A Comparison of the Detailed Chemical Abundances of Globular Clusters in the Milky Way, Andromeda, and Centaurus A Galaxies Colucci, Janet E.; Bernstein, Rebecca 2016-01-01 We present a homogeneous analysis of high resolution spectra of globular clusters in three massive galaxies: the Milky Way, M31, and NGC 5128. We measure detailed abundance ratios for alpha, light, Fe-peak, and neutron capture elements using our technique for analyzing the integrated light spectra of globular clusters. For many of the heavy elements we provide a first look at the detailed chemistry of old populations in an early type galaxy. We discuss similarities and differences between the galaxies and the potential implications for their star formation histories. 15. Chemical abundances in the multiple sub-giant branch of 47 Tucanae: insights on its faint sub-giant branch component Marino, A. F.; Milone, A. P.; Casagrande, L.; Collet, R.; Dotter, A.; Johnson, C. I.; Lind, K.; Bedin, L. R.; Jerjen, H.; Aparicio, A.; Sbordone, L. 
2016-06-01 The globular cluster 47 Tuc exhibits a complex sub-giant branch (SGB) with a faint-SGB comprising only about the 10 per cent of the cluster mass and a bright-SGB hosting at least two distinct populations. We present a spectroscopic analysis of 62 SGB stars including 21 faint-SGB stars. We thus provide the first chemical analysis of the intriguing faint-SGB population and compare its abundances with those of the dominant populations. We have inferred abundances of Fe, representative light elements C, N, Na, and Al, α elements Mg and Si for individual stars. Oxygen has been obtained by co-adding spectra of stars on different sequences. In addition, we have analysed 12 stars along the two main RGBs of 47 Tuc. Our principal results are (i) star-to-star variations in C/N/Na among RGB and bright-SGB stars; (ii) substantial N and Na enhancements for the minor population corresponding to the faint-SGB; (iii) no high enrichment in C+N+O for faint-SGB stars. Specifically, the C+N+O of the faint-SGB is a factor of 1.1 higher than the bright-SGB, which, considering random (±1.3) plus systematic errors (±0.3), means that their C+N+O is consistent within observational uncertainties. However, a small C+N+O enrichment for the faint-SGB, similar to what predicted on theoretical ground, cannot be excluded. The N and Na enrichment of the faint-SGB qualitatively agrees with this population possibly being He-enhanced, as suggested by theory. The iron abundance of the bright and faint-SGB is the same to a level of ˜0.10 dex, and no other significant difference for the analysed elements has been detected. 16. Abundance analysis of B, A and F dwarfs in the M6 open cluster: Spectrum synthesis method Kiliçoğlu, T.; Monier, R.; Fossati, L. 2012-12-01 The chemical abundances of 10 stars in the M6 open cluster (˜100 Myr) were derived using spectrum synthesis. The stars were observed using the FLAMES/GIRAFFE spectrograph. We found star-to-star variations in abundances for A type stars. General enrichment of Si, Cr, and Y were obtained for the cluster. 17. Chemical analysis of water in hydrogeology The aim of the monograph is to give complete information on the chemical analysis of water hydrogeology not only for the students program of Geology study (Bachelor degree study), Engineering Geology and Hydrogeology (Master's degree study) and Engineering Geology (doctoral level study), but also for students from other colleges and universities schools in Slovakia, as well as in the Czech Republic, dealing with the chemical composition of water and its quality, from different perspectives. The benefit would be for professionals with hydrogeological, water and environmental practices, who can find there all the necessary information about proper water sampling, the units used in the chemical analysis of water, expressing the proper chemical composition of water in its various parameters through classification of chemical composition of the water up to the basic features of physical chemistry at thermodynamic calculations and hydrogeochemical modelling. 18. Exploration of earth-abundant transition metals (Fe, Co, and Ni) as catalysts in unreactive chemical bond activations. Su, Bo; Cao, Zhi-Chao; Shi, Zhang-Jie 2015-03-17 Activation of inert chemical bonds, such as C-H, C-O, C-C, and so on, is a very important area, to which has been drawn much attention by chemists for a long time and which is viewed as one of the most ideal ways to produce valuable chemicals. 
Under modern chemical bond activation logic, many conventionally viewed "inert" chemical bonds that were intact under traditional conditions can be reconsidered as novel functionalities, which not only avoids the tedious synthetic procedures for prefunctionalizations and the emission of undesirable wastes but also inspires chemists to create novel synthetic strategies in completely different manners. Although activation of "inert" chemical bonds using stoichiometric amounts of transition metals has been reported in the past, much more attractive and challenging catalytic transformations began to blossom decades ago. Compared with the broad application of late and noble transition metals in this field, the earth-abundant first-row transition-metals, such as Fe, Co, and Ni, have become much more attractive, due to their obvious advantages, including high abundance on earth, low price, low or no toxicity, and unique catalytic characteristics. In this Account, we summarize our recent efforts toward Fe, Co, and Ni catalyzed "inert" chemical bond activation. Our research first unveiled the unique catalytic ability of iron catalysts in C-O bond activation of both carboxylates and benzyl alcohols in the presence of Grignard reagents. The benzylic C-H functionalization was also developed via Fe catalysis with different nucleophiles, including both electron-rich arenes and 1-aryl-vinyl acetates. Cobalt catalysts also showed their uniqueness in both aromatic C-H activation and C-O activation in the presence of Grignard reagents. We reported the first cobalt-catalyzed sp(2) C-H activation/arylation and alkylation of benzo[h]quinoline and phenylpyridine, in which a new catalytic pathway via an oxidative addition process was demonstrated 19. P-MaNGA Galaxies: Emission Lines Properties - Gas Ionisation and Chemical Abundances from Prototype Observations Belfiore, F; Bundy, K; Thomas, D; Maraston, C; Wilkinson, D; Sánchez, S F; Bershady, M; Blanc, G A; Bothwell, M; Cales, S L; Coccato, L; Drory, N; Emsellem, E; Fu, H; Gelfand, J; Law, D; Masters, K; Parejko, J; Tremonti, C; Wake, D; Weijmans, A; Yan, R; Xiao, T; Zhang, K; Zheng, T; Bizyaev, D; Kinemuchi, K; Oravetz, D; Simmons, A 2014-01-01 MaNGA (Mapping Nearby Galaxies at Apache Point Observatory) is a SDSS-IV survey that will obtain spatially resolved spectroscopy from 3600 \\AA\\ to 10300 \\AA\\ for a representative sample of over 10000 nearby galaxies. In this paper we present the analysis of nebular emission line properties using observations of 14 galaxies obtained with P-MaNGA, a prototype of the MaNGA instrument. By using spatially resolved diagnostic diagrams we find extended star formation in galaxies that are centrally dominated by Seyfert/LINER-like emission, illustrating that galaxy characterisations based on single fibre spectra are necessarily incomplete. We observe extended (up to $\\rm 1 R_{e}$) LINER-like emission in the central regions of three galaxies. We make use of the $\\rm EW(H \\alpha)$ to argue that the observed emission is consistent with ionisation from hot evolved stars. Using stellar population indices we conclude that galactic regions which are ionised by a Seyfert/LINER-like radiation field are also devoid of recent st... 20. The 2011 October Draconids Outburst. II. 
Meteoroid Chemical Abundances from Fireball Spectroscopy Madiedo, J M; Konovalova, N; Williams, I P; Castro-Tirado, A J; Ortiz, J L; Cabrera-Caño, J 2013-01-01 On October 8, 2011 the Earth crossed dust trails ejected from comet 21P/Giacobini-Zinner in the late 19th and early 20th Century. This gave rise to an outburst in the activity of the October Draconid meteor shower, and an international team was organized to analyze this event. The SPanish Meteor Network (SPMN) joined this initiative and recorded the October Draconids by means of low light level CCD cameras. In addition, spectroscopic observations were carried out. Tens of multi-station meteor trails were recorded, including an extraordinarily bright October Draconid fireball (absolute mag. -10.5) that was simultaneously imaged from three SPMN meteor observing stations located in Andalusia. Its spectrum was obtained, showing a clear evolution in the relative intensity of emission lines as the fireball penetrated deeper into the atmosphere. Here we focus on the analysis of this remarkable spectrum, but also discuss the atmospheric trajectory, atmospheric penetration, and orbital data computed for this bolide w... 1. In-depth analysis of low abundant proteins in bovine colostrum using different fractionation techniques Nissen, Asger; Bendixen, Emøke; Ingvartsen, Klaus Lønne; 2012-01-01 Bovine colostrum is well known for its large content of bioactive components and its importance for neonatal survival. Unfortunately, the colostrum proteome is complicated by a wide dynamic range, because of a few dominating proteins that hamper sensitivity and proteome coverage achieved on low abundant proteins. Moreover, the composition of colostrum is complex and the proteins are located within different physical fractions that make up the colostrum. To gain a more exhaustive picture of the bovine colostrum proteome and gather information on protein location, we performed an extensive pre... ...-speed centrifugation contributed most to detection of low abundant proteins. Hence, prefractionation of colostrum prior to 2D-LC-MS/MS analysis expanded our knowledge on the presence and location of low abundant proteins in bovine colostrum. 2. 40 CFR 761.253 - Chemical analysis. 2010-07-01 ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Chemical analysis. 761.253 Section 761.253 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT... analysis. (a) Extract PCBs from the standard wipe sample collection medium and clean-up the extracted... 3. PPAK Wide-field Integral Field Spectroscopy of NGC 628: II. Emission line abundance analysis Rosales-Ortega, F F; Kennicutt, R C; Sánchez, S F 2011-01-01 In this second paper of the series, we present the 2-dimensional (2D) emission line abundance analysis of NGC 628, the largest object within the PPAK Integral Field Spectroscopy (IFS) Nearby Galaxies Survey: PINGS. We introduce the methodology applied to the 2D IFS data in order to extract and deal with large spectral samples, from which a 2D abundance analysis can be later performed. We obtain the most complete and reliable abundance gradient of the galaxy to date, by using the largest number of spectroscopic points sampled in the galaxy, and by comparing the statistical significance of different strong-line metallicity indicators.
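A hedged sketch of the kind of gradient fit referred to in the NGC 628 record just above (and whose possible breaks are discussed immediately below): a straight-line fit of 12 + log(O/H) against galactocentric radius, here with fabricated data rather than PINGS measurements.

```python
import numpy as np

# Toy radial abundance gradient fit: 12 + log(O/H) versus galactocentric radius.
# The "measurements" are fabricated for illustration; real analyses would fit
# separate slopes to the inner, disc, and outer regions when breaks are present.
rng = np.random.default_rng(0)
radius_kpc = rng.uniform(0.5, 12.0, 60)
true_gradient = -0.04                              # dex per kpc, illustrative
oh = 8.7 + true_gradient * radius_kpc + rng.normal(0.0, 0.05, radius_kpc.size)

slope, intercept = np.polyfit(radius_kpc, oh, 1)
print(f"central abundance 12+log(O/H) = {intercept:.2f}")
print(f"gradient = {slope:+.3f} dex/kpc")
```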
We find features not previously reported for this galaxy that imply a multi-modality of the abundance gradient consistent with a nearly flat-distribution in the innermost regions of the galaxy, a steep negative gradient along the disc and a shallow gradient or nearly-constant metallicity beyond the optical edge of the galaxy. The N/O ratio seems to follow the same radial behavi... 4. CD -24_17504 revisited: a new comprehensive element abundance analysis Jacobson, Heather R 2015-01-01 With [Fe/H] ~ -3.3, CD -24_17504 is a canonical metal-poor main sequence turn-off star. Though it has appeared in numerous literature studies, the most comprehensive abundance analysis for the star based on high resolution, high signal-to-noise spectra is nearly 15 years old. We present a new detailed abundance analysis for 21 elements based on combined archival Keck-HIRES and VLT-UVES spectra of the star that is higher in both spectral resolution and signal-to-noise than previous data. Our results for many elements are very similar to those of an earlier comprehensive study of the star, but we present for the first time a carbon abundance from the CH G-band feature as well as improved upper limits for neutron-capture species such as Y, Ba and Eu. In particular, we find that CD -24_17504 has [Fe/H] = -3.41, [C/Fe] = +1.10, [Sr/H] = -4.68 and [Ba/H] <= -4.46, making it a carbon enhanced metal-poor star with neutron-capture element abundances among the lowest measured in Milky Way halo stars. 5. Chemical abundances for Hf 2-2, a planetary nebula with the strongest known heavy element recombination lines Liu, X W; Zhang, Y; Bastin, R J; Storey, P J 2006-01-01 We present high quality optical spectroscopic observations of the planetary nebula (PN) Hf 2-2. The spectrum exhibits many prominent optical recombination lines (ORLs) from heavy element ions. Analysis of the H {\\sc i} and He {\\sc i} recombination spectrum yields an electron temperature of $\\sim 900$ K, a factor of ten lower than given by the collisionally excited [O {\\sc iii}] forbidden lines. The ionic abundances of heavy elements relative to hydrogen derived from ORLs are about a factor of 70 higher than those deduced from collisionally excited lines (CELs) from the same ions, the largest abundance discrepancy factor (adf) ever measured for a PN. By comparing the observed O {\\sc ii} $\\lambda$4089/$\\lambda$4649 ORL ratio to theoretical value as a function of electron temperature, we show that the O {\\sc ii} ORLs arise from ionized regions with an electron temperature of only $\\sim 630$ K. The current observations thus provide the strongest evidence that the nebula contains another previously unknown compone... 6. Chemical analysis of high purity graphite The Sub-Committee on Chemical Analysis of Graphite was organized in April 1989, under the Committee on Chemical Analysis of Nuclear Fuels and Reactor Materials, JAERI. The Sub-Committee carried out collaborative analyses among eleven participating laboratories for the certification of the Certified Reference Materials (CRMs), JAERI-G5 and G6, after developing and evaluating analytical methods during the period of September 1989 to March 1992. The certified values were given for ash, boron and silicon in the CRM based on the collaborative analysis. The values for ten elements (Al, Ca, Cr, Fe, Mg, Mo, Ni, Sr, Ti, V) were not certified, but given for information. Preparation, homogeneity testing and chemical analyses for certification of reference materials were described in this paper. 
(author) 52 refs 7. Spectroscopic chemical analysis methods and apparatus Hug, William F. (Inventor); Reid, Ray D. (Inventor); Bhartia, Rohit (Inventor) 2013-01-01 Spectroscopic chemical analysis methods and apparatus are disclosed which employ deep ultraviolet (e.g. in the 200 nm to 300 nm spectral range) electron beam pumped wide bandgap semiconductor lasers, incoherent wide bandgap semiconductor light emitting devices, and hollow cathode metal ion lasers to perform non-contact, non-invasive detection of unknown chemical analytes. These deep ultraviolet sources enable dramatic size, weight and power consumption reductions of chemical analysis instruments. Chemical analysis instruments employed in some embodiments include capillary and gel plane electrophoresis, capillary electrochromatography, high performance liquid chromatography, flow cytometry, flow cells for liquids and aerosols, and surface detection instruments. In some embodiments, Raman spectroscopic detection methods and apparatus use ultra-narrow-band angle tuning filters, acousto-optic tuning filters, and temperature tuned filters to enable ultra-miniature analyzers for chemical identification. In some embodiments Raman analysis is conducted along with photoluminescence spectroscopy (i.e. fluorescence and/or phosphorescence spectroscopy) to provide high levels of sensitivity and specificity in the same instrument. 8. Distinctive serum protein profiles involving abundant proteins in lung cancer patients based upon antibody microarray analysis Cancer serum protein profiling by mass spectrometry has uncovered mass profiles that are potentially diagnostic for several common types of cancer. However, direct mass spectrometric profiling has a limited dynamic range and difficulties in providing the identification of the distinctive proteins. We hypothesized that distinctive profiles may result from the differential expression of relatively abundant serum proteins associated with the host response. Eighty-four antibodies, targeting a wide range of serum proteins, were spotted onto nitrocellulose-coated microscope slides. The abundances of the corresponding proteins were measured in 80 serum samples, from 24 newly diagnosed subjects with lung cancer, 24 healthy controls, and 32 subjects with chronic obstructive pulmonary disease (COPD). Two-color rolling-circle amplification was used to measure protein abundance. Seven of the 84 antibodies gave a significant difference (p < 0.01) for the lung cancer patients as compared to healthy controls, as well as compared to COPD patients. Proteins that exhibited higher abundances in the lung cancer samples relative to the control samples included C-reactive protein (CRP; a 13.3 fold increase), serum amyloid A (SAA; a 2.0 fold increase), mucin 1 and α-1-antitrypsin (1.4 fold increases). The increased expression levels of CRP and SAA were validated by Western blot analysis. Leave-one-out cross-validation was used to construct Diagonal Linear Discriminant Analysis (DLDA) classifiers. At a cutoff where all 56 of the non-tumor samples were correctly classified, 15/24 lung tumor patient sera were correctly classified. Our results suggest that a distinctive serum protein profile involving abundant proteins may be observed in lung cancer patients relative to healthy subjects or patients with chronic disease and may have utility as part of strategies for detecting lung cancer 9. 
MyGIsFOS: an automated code for parameter determination and detailed abundance analysis in cool stars Sbordone, L; Bonifacio, P; Duffau, S 2013-01-01 The current and planned high-resolution, high-multiplexity stellar spectroscopic surveys, as well as the swelling amount of under-utilized data present in public archives have led to an increasing number of efforts to automate the crucial but slow process to retrieve stellar parameters and chemical abundances from spectra. We present MyGIsFOS, a code designed to derive atmospheric parameters and detailed stellar abundances from medium - high resolution spectra of cool (FGK) stars. We describe the general structure and workings of the code, present analyses of a number of well studied stars representative of the parameter space MyGIsFOS is designed to cover, and examples of the exploitation of MyGIsFOS very fast analysis to assess uncertainties through Montecarlo tests. MyGIsFOS aims to reproduce a `traditional'' manual analysis by fitting spectral features for different elements against a precomputed grid of synthetic spectra. Fe I and Fe II lines can be employed to determine temperature, gravity, microturbu... 10. Service activities of chemical analysis division Progress of the Division during the year of 1988 was described on the service activities for various R and D projects carrying out in the Institute, for the fuel fabrication and conversion plant, and for the post-irradiation examination facility. Relevant analytical methodologies developed for the chemical analysis of an irradiated fuel, safeguards chemical analysis, and pool water monitoring were included such as chromatographic separation of lanthanides, polarographic determination of dissolved oxygen in water, and automation on potentiometric titration of uranium. Some of the laboratory manuals revised were also included in this progress report. (Author) 11. Genome-wide analysis of coordinated transcript abundance during seed development in different Brassica rapa morphotypes Basnet, R.K.; Moreno Pachón, N.M.; Lin, K.; Bucher, J; Visser, R.G.F.; Maliepaard, C.A.; Bonnema, A.B. 2013-01-01 Brassica seeds are important as basic units of plant growth and sources of vegetable oil. Seed development is regulated by many dynamic metabolic processes controlled by complex networks of spatially and temporally expressed genes. We conducted a global microarray gene co-expression analysis by measuring transcript abundance of developing seeds from two diverse B. rapa morphotypes: a pak choi (leafy-type) and a yellow sarson (oil-type), and two of their doubled haploid (DH) progenies, (1) to ... 12. Detailed atmospheric abundance analysis of the optical counterpart of the IR source IRAS 16559-2957 Molina, R E 2013-01-01 We have undertaken a detailed abundance analysis of the optical counterpart of the IR source IRAS16559-2957 with the aim of confirming its possible post-AGB nature. The star shows solar metallicity and our investigation of a large number of elements including CNO and 12C/13C suggests that this object has experienced the first dredge-up and it is likely still at RGB stage. 13. Accurate and homogeneous abundance patterns in solar-type stars of the solar neighbourhood: a chemo-chronological analysis da Silva, R.; Porto de Mello, G. F.; Milone, A. C.; da Silva, L.; Ribeiro, L. S.; Rocha-Pinto, H. J. 
2012-06-01 Aims: We report the derivation of abundances of C, Na, Mg, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Y, Zr, Ba, Ce, Nd, and Sm in a sample of 25 solar-type stars of the solar neighbourhood, correlating the abundances with the stellar ages, kinematics, and orbital parameters. Methods: The spectroscopic analysis, based on data of high resolution and high signal-to-noise ratio, was differential to the Sun and applied to atomic line equivalent widths supplemented by the spectral synthesis of C and C2 features. We also performed a statistical study by using the method of tree clustering analysis, searching for groups of stars sharing similar elemental abundance patterns. We derived the stellar parameters from various criteria, with average errors of 30 K, 0.13 dex, and 0.05 dex, respectively, for Teff, log g, and [Fe/H]. The average error of the [X/Fe] abundance ratios is 0.06 dex. Ages were derived from theoretical HR diagrams and membership of the stars in known kinematical moving groups. Results: We identified four stellar groups: one having, on average, over-solar abundances (⟨[X/H]⟩ = +0.26 dex), another with under-solar abundances (⟨ [X/H] ⟩ = -0.24 dex), and two with intermediate values (⟨ [X/H] ⟩ = -0.06 and +0.06 dex) but with distinct chemical patterns. Stars sharing solar metallicity, age, and Galactic orbit possibly have non-solar abundance ratios, a possible effect either of chemical heterogeneity in their natal clouds or migration. A trend of [Cu/Fe] with [Ba/Fe] seems to exist, in agreement with previous claims in the literature, and maybe also of [Sm/Fe] with [Ba/Fe]. No such correlation involving C, Na, Mn, and Zn is observed. The [X/Fe] ratios of various elements show significant correlations with age. [Mg/Fe], [Sc/Fe], and [Ti/Fe] increase with age. [Mn/Fe] and [Cu/Fe] display a more complex behaviour, first increasing towards younger stars up to the solar age, and then decreasing, a result we interpret as possibly related to time 14. Abundance analysis of Am binaries and search for tidally driven abundance anomalies - III. HD116657, HD138213, HD155375, HD159560, HD196544 and HD204188 Stateva, I; Budaj, J 2011-01-01 We continue here the systematic abundance analysis of a sample of Am binaries in order to search for possible abundance anomalies driven by tidal interaction in these binary systems. New CCD observations in two spectral regions (6400-6500, 6660-6760 AA) of HD116657, HD138213, HD155375, HD159560, HD196544 and HD204188 were obtained. Synthetic spectrum analysis was carried out and basic stellar properties, effective temperatures, gravities, projected rotational velocities, masses, ages and abundances of several elements were determined. We conclude that all six stars are Am stars. These stars were put into the context of other Am binaries with 10 < Porb < 200 days and their abundance anomalies discussed in the context of possible tidal effects. There is clear anti-correlation of the Am peculiarities with v sin i. However, there seems to be also a correlation with the eccentricity and may be with the orbital period. The dependence on the temperature, age, mass, and microturbulence was studied as well. The ... 15. The Chemical Composition Contrast between M3 and M13 Revisited: New Abundances for 28 Giant Stars in M3 Sneden, C; Guhathakurta, P; Peterson, R C; Fulbright, J P; Sneden, Christopher; Kraft, Robert P.; Guhathakurta, Puragra; Peterson, Ruth C.; Fulbright, Jon P. 
2003-01-01 We report new chemical abundances of 23 bright red giants of the globular cluster M3, based on high-resolution spectra obtained with the Keck I telescope. Combining these data with a previously-reported small sample of M3 giants obtained with the Lick 3m telescope, we compare [X/Fe] ratios for 28 M3 giants with 35 M13 giants, and with halo field stars. All three groups exhibit C depletion with advancing evolutionary state beginning at the RGB bump region, but the overall depletion in the clusters is larger than that of the field stars. The behaviors of O, Na, Mg and Al are distinctively different among the three stellar samples. Both M3 and M13 show evidence of high-temperature proton capture synthesis from the ON, NeNa, and MgAl cycles, while there is no evidence for such synthesis among halo field stars. But the degree of such extreme proton-capture synthesis in M3 is smaller than it is in M13, and there is no indication that O depletions are a function of advancing evolutionary state, as has been claimed for M13. We... 16. A Detailed Study of Giants and Horizontal Branch Stars in M68: Atmospheric Parameters and Chemical Abundances Schaeuble, Marc; Sneden, Chris; Thompson, I B; Shectman, S A; Burley, G S 2015-01-01 In this paper, we present a detailed high-resolution spectroscopic study of post main sequence stars in the Globular Cluster M68. Our sample, which covers a range of 4000 K in $T_{eff}$, and 3.5 dex in $log(g)$, is comprised of members from the red giant, red horizontal, and blue horizontal branch, making this the first high-resolution globular cluster study covering such a large evolutionary and parameter space. Initially, atmospheric parameters were determined using photometric as well as spectroscopic methods, both of which resulted in unphysical and unexpected $T_{eff}$, $log(g)$, $\\xi_{t}$, and [Fe/H] combinations. We therefore developed a hybrid approach that addresses most of these problems, and yields atmospheric parameters that agree well with other measurements in the literature. Furthermore, our derived stellar metallicities are consistent across all evolutionary stages, with $\\langle$[Fe/H]$\\rangle$ = $-$2.42 ($\\sigma$ = 0.14) from 25 stars. Chemical abundances obtained using our methodology also ... 17. Abundance Ratios in Stars vs. Hot Gas in Elliptical Galaxies: the Chemical Evolution Modeller Point of View Pipino, A 2009-01-01 I will present predictions from a chemical evolution model aimed at a self-consistent study of both optical (i.e. stellar) and X-ray (i.e. gas) properties of present-day elliptical galaxies. Detailed cooling and heating processes in the interstellar medium (ISM) are taken into account and allow a reliable modelling of the SN-driven galactic wind. SNe Ia activity, in fact, may power a galactic wind lasting for a considerable amount of the galactic lifetime, even in the case for which the efficiency of energy transfer into the ISM per SN Ia event is less than unity. The model simultaneously reproduces the mass-metallicity, the colour-magnitude, the L_X - L_B and the L_X - T relations, as well as the observed trend of the [Mg/Fe] ratio as a function of sigma, by adopting the prescriptions of Pipino & Matteucci (2004) for the gas infall and star formation timescales. The "iron discrepancy", namely the too high predicted iron abundance in X-ray haloes of ellipticals compared to observations, can be solved by taking into ... 18.
Entropy generation reduction through chemical pinch analysis The pinch analysis (PA) concept emerged, late '80s, as one of the methods to address the energy management in the new era of sustainable development. It was derived from combined first and second law analysis, as a technique ensuring a better thermal integration, aiming the minimization of entropy production or, equivalently, exergy destruction by heat exchanger networks (HEN). Although its ascendance from the second law analysis is questionable, the PA reveals as a widespread tool, nowadays, helping in energy savings mostly through a more rational use of utilities. Unfortunately, as principal downside, one should be aware that the global minimum entropy production is seldom attained, since the PA does not tackle the whole plant letting aside the chemical reactors or separation trains. The chemical reactor network (CRN) is responsible for large amounts of entropy generation (exergy losses), mainly due to the combined composition and temperature change. The chemical pinch analysis (CPA) concept focuses on, simultaneously, the entropy generation reduction of both CRN and HEN, while keeping the state and working parameters of the plant in the range of industrial interest. The fundamental idea of CPA is to include the CRN (through the chemical reaction heat developed in reactors) into the HEN and to submit this extended system to the PA. This is accomplished by replacing the chemical reactor with a virtual heat exchanger system producing the same amount of entropy. For an endothermic non-adiabatic chemical reactor, the (stepwise infinitesimal) supply heat δq flows from a source (an external/internal heater) to the stream undergoing the chemical transformation through the reactor, which in turn releases the heat of reaction ΔHR to a virtual cold stream flowing through a virtual cooler. For an exothermic non-adiabatic chemical reactor, the replacement is likewise, but the heat flows oppositely. Thus, in the practice of designing or retrofitting a flowsheet, in order to 19. Spectral Properties of Cool Stars: Extended Abundance Analysis of 1626 Planet Search Stars Brewer, John M; Valenti, Jeff A; Piskunov, Nikolai 2016-01-01 We present a catalog of uniformly determined stellar properties and abundances for 1626 F, G, and K stars using an automated spectral synthesis modeling procedure. All stars were observed using the HIRES spectrograph at Keck Observatory. Our procedure used a single line list to fit model spectra to observations of all stars to determine effective temperature, surface gravity, metallicity, projected rotational velocity, and the abundances of 15 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, V, Cr, Mn, Fe, Ni, & Y). Sixty percent of the sample had Hipparcos parallaxes and V-band photometry which we combined with the spectroscopic results to obtain mass, radius, and luminosity. Additionally, we used the luminosity, effective temperature, metallicity and alpha-element enhancement to interpolate in the Yonsei-Yale isochrones to derive mass, radius, gravity, and age ranges for those stars. Finally, we determined new relations between effective temperature and macroturbulence for dwarfs and subgiants. Our analysis a... 20. High-Resolution Abundance Analysis of Very Metal-Rich Stars in the Solar Neighborhood Castro, S; Grenon, Michel; Barbuy, B; McCarthy, J K 1997-01-01 We report detailed analysis of high-resolution spectra of nine high velocity metal-rich dwarfs in the solar neighborhood. 
The stars are super metal-rich and 5 of them have [Fe/H]>=+0.4, making them the most metal-rich stars currently known. We find that alpha-elements decrease with increasing metallicity; s-elements are underabundant by about [s-elements/Fe]=-0.3. While exceeding the [Fe/H] of current bulge samples, the chemistry of these stars has important similarities and differences. The near-solar abundances of the alpha-capture elements places these stars on the metal-rich extension of McWilliam & Rich (1994) [ApJS, 91, 749], but their s-process abundances are much lower than those of the bulge giants. These low s-process values have been interpreted as the hallmark of an ancient stellar population. 1. Abundance analysis of prime B-type targets for asteroseismology I. Nitrogen excess in slowly-rotating beta Cephei stars Morel, T; Aerts, C; Neiner, C; Briquet, M 2006-01-01 We present the results of a detailed NLTE abundance study of nine beta Cephei stars, all of them being prime targets for theoretical modelling: gamma Peg, delta Cet, nu Eri, beta CMa, xi1 CMa, V836 Cen, V2052 Oph, beta Cep and DD (12) Lac. The following chemical elements are considered: He, C, N, O, Mg, Al, Si, S and Fe. Our abundance analysis is based on a large number of time-resolved, high-resolution optical spectra covering in most cases the entire oscillation cycle of the stars. Nitrogen is found to be enhanced by up to 0.6 dex in four stars, three of which have severe constraints on their equatorial rotational velocity, \\Omega R, from seismic or line-profile variation studies: beta Cep (\\Omega R~26 km/s), V2052 Oph (\\Omega R~56 km/s), delta Cet (\\Omega R < 28 km/s) and xi1 CMa (\\Omega R sin i < 10 km/s). The existence of core-processed material at the surface of such largely unevolved, slowly-rotating objects is not predicted by current evolutionary models including rotation. We draw attention to ... 2. Abundance analysis, spectral variability, and search for the presence of a magnetic field in the typical PGa star HD 19400 Hubrig, S.; Castelli, F.; González, J. F.; Carroll, T. A.; Ilyin, I.; Schöller, M.; Drake, N. A.; Korhonen, H.; Briquet, M. 2014-08-01 The aim of this study is to carry out an abundance determination, to search for spectral variability and for the presence of a weak magnetic field in the typical PGa star HD 19400. High-resolution, high signal-to-noise High Accuracy Radial-velocity Planet Searcher (HARPS) spectropolarimetric observations of HD 19400 were obtained at three different epochs in 2011 and 2013. For the first time, we present abundances of various elements determined using an ATLAS12 model, including the abundances of a number of elements not analysed by previous studies, such as Ne I, Ga II, and Xe II. Several lines of As II are also present in the spectra of HD 19400. To study the variability, we compared the behaviour of the line profiles of various elements. We report on the first detection of anomalous shapes of line profiles belonging to Mn and Hg, and the variability of the line profiles belonging to the elements Hg, P, Mn, Fe, and Ga. We suggest that the variability of the line profiles of these elements is caused by their non-uniform surface distribution, similar to the presence of chemical spots detected in HgMn stars. The search for the presence of a magnetic field was carried out using the moment technique and the Singular Value Decomposition (SVD) method. 
Our measurements of the magnetic field with the moment technique using 22 Mn II lines indicate the potential existence of a weak variable longitudinal magnetic field on the first epoch. The SVD method applied to the Mn II lines indicates = -76 ± 25 G on the first epoch, and at the same epoch the SVD analysis of the observations using the Fe II lines shows = -91 ± 35 G. The calculated false alarm probability values, 0.008 and 0.003, respectively, are above the value 10-3, indicating no detection. 3. TOPoS . II. On the bimodality of carbon abundance in CEMP stars Implications on the early chemical evolution of galaxies Bonifacio, P.; Caffau, E.; Spite, M.; Limongi, M.; Chieffi, A.; Klessen, R. S.; François, P.; Molaro, P.; Ludwig, H.-G.; Zaggia, S.; Spite, F.; Plez, B.; Cayrel, R.; Christlieb, N.; Clark, P. C.; Glover, S. C. O.; Hammer, F.; Koch, A.; Monaco, L.; Sbordone, L.; Steffen, M. 2015-07-01 Context. In the course of the Turn Off Primordial Stars (TOPoS) survey, aimed at discovering the lowest metallicity stars, we have found several carbon-enhanced metal-poor (CEMP) stars. These stars are very common among the stars of extremely low metallicity and provide important clues to the star formation processes. We here present our analysis of six CEMP stars. Aims: We want to provide the most complete chemical inventory for these six stars in order to constrain the nucleosynthesis processes responsible for the abundance patterns. Methods: We analyse both X-Shooter and UVES spectra acquired at the VLT. We used a traditional abundance analysis based on OSMARCS 1D local thermodynamic equilibrium (LTE) model atmospheres and the turbospectrum line formation code. Results: Calcium and carbon are the only elements that can be measured in all six stars. The range is -5.0 ≤ [Ca/H] UIP) stars. No lithium is detected in the spectrum of SDSS J1742+2531 or SDSS J1035+0641, which implies a robust upper limit of A(Li) UIP stars shows a large star-to-star scatter in the [X/Ca] ratios for all elements up to aluminium (up to 1 dex), but this scatter drops for heavier elements and is at most of the order of a factor of two. We propose that this can be explained if these stars are formed from gas that has been chemically enriched by several SNe, that produce the roughly constant [X/Ca] ratios for the heavier elements, and in some cases the gas has also been polluted by the ejecta of a faint SN that contributes the lighter elements in variable amounts. The absence of lithium in four of the five known unevolved UIP stars can be explained by a dominant role of fragmentation in the formation of these stars. This would result either in a destruction of lithium in the pre-main-sequence phase, through rotational mixing or to a lack of late accretion from a reservoir of fresh gas. The phenomenon should have varying degrees of efficiency. Based on observations obtained at ESO Paranal 4. Abundance analysis of the supergiant stars HD 80057 and HD 80404 based on their UVES Spectra Tanrıverdi, Taner 2015-01-01 This study presents elemental abundances of the early A-type supergiant HD 80057 and the late A-type supergiant HD80404. High resolution and high signal-to-noise ratio spectra published by the UVES Paranal Observatory Project (Bagnulo et al., 2003) were analysed to compute their elemental abundances using ATLAS9 (Kurucz, 1993, 2005; Sbordone et al., 2004). In our analysis we assumed local thermodynamic equilibrium. 
The atmospheric parameters of HD 80057 used in this study are from Firnstein & Przybilla (2012), and that of HD80404 are derived from spectral energy distribution, ionization equilibria of Cr I/II and Fe I/II, and the fits to the wings of Balmer lines and Paschen lines as Teff = 7700 +/- 150 K and log g=1.60 +/- 0.15 (in cgs). The microturbulent velocities of HD 80057 and HD 80404 have been determined as 4.3 +/- 0.1 and 2.2 +/- 0.7 km s^-1 . The rotational velocities are 15 +/-1 and 7 +/- 2 km s^-1 and their macroturbulence velocities are 24 +/-2 and 2+/-1 km s^-1 . We have given the abundances... 5. Spectral matching for abundances and clustering analysis of stars on the giant branches of {\\omega} Centauri Simpson, Jeffrey D; Worley, C C 2012-01-01 We have determined stellar parameters and abundances for 221 giant branch stars in the globular cluster {\\omega} Centauri. A combination of photometry and lower-resolution spectroscopy was used to determine temperature, gravity, metallicity, [C/Fe], [N/Fe] and [Ba/Fe]. These abundances agree well with those found by previous researchers and expand the analysed sample of the cluster. k-means clustering analysis was used to group the stars into four homogeneous groups based upon these abundances. These stars show the expected anticorrelation in [C/Fe] to [N/Fe]. We investigated the distribution of CN-weak/strong stars on the colour-magnitude diagram. Asymptotic giant branch stars, which were selected from their position on the colour-magnitude diagram, were almost all CN-weak. This is in contrast to the red giant branch where a large minority were CN-strong. The results were also compared with cluster formation and evolution models. Overall, this study shows that statistically significant elemental and evolutio... 6. Analysis of boron concentration deviation and 10B abundance evolution in primary loop of pressurized nuclear plants The 10B abundance evolution under two conditions, i.e., with and without boronizing, is calculated and analyzed with theoretical derivation method, and the evolution pattern of 10B abundance for one cycle in PWR is provided. The comparison of the calculated and measured 10B abundance shows that the abundance equation considering the boronizing is accurate. With this, the theoretical boron concentration provided by the fuel management software can be corrected and validated. According to the equation and analysis method, the boron concentration deviation problem could be well understood or even solved. (authors) 7. Unprecedented accurate abundances: signatures of other Earths? Melendez, J.; Asplund, M.; Gustafsson, B.; Yong, D.; Ramirez, I. 2009-01-01 For more than 140 years the chemical composition of our Sun has been considered typical of solar-type stars. Our highly differential elemental abundance analysis of unprecedented accuracy (~0.01 dex) of the Sun relative to solar twins, shows that the Sun has a peculiar chemical composition with a ~20% depletion of refractory elements relative to the volatile elements in comparison with solar twins. The abundance differences correlate strongly with the condensation temperatures of the elements... 8. 
CHEMICAL ENRICHMENT IN THE FAINTEST GALAXIES: THE CARBON AND IRON ABUNDANCE SPREADS IN THE BOOeTES I DWARF SPHEROIDAL GALAXY AND THE SEGUE 1 SYSTEM We present an AAOmega spectroscopic study of red giants in the ultra-faint dwarf galaxy Booetes I (MV ∼ -6) and the Segue 1 system (MV ∼ -1.5), either an extremely low luminosity dwarf galaxy or an unusually extended globular cluster. Both Booetes I and Segue 1 have significant abundance dispersions in iron and carbon. Booetes I has a mean abundance of [Fe/H] = -2.55 ± 0.11 with an [Fe/H] dispersion of σ = 0.37 ± 0.08, and abundance spreads of Δ[Fe/H] = 1.7 and Δ[C/H] = 1.5. Segue 1 has a mean of [Fe/H] = -2.7 ± 0.4 with [Fe/H] dispersion of σ = 0.7 ± 0.3, and abundances spreads of Δ[Fe/H] = 1.6 and Δ[C/H] = 1.2. Moreover, Segue 1 has a radial-velocity member at four half-light radii that is extremely metal-poor and carbon-rich, with [Fe/H] = -3.5, and [C/Fe] = +2.3. Modulo an unlikely non-member contamination, the [Fe/H] abundance dispersion confirms Segue 1 as the least-luminous ultra-faint dwarf galaxy known. For [Fe/H] V = -5. The very low mean iron abundances and the high carbon and iron abundance dispersions in Segue 1 and Booetes I are consistent with highly inhomogeneous chemical evolution starting in near zero-abundance gas. These ultra-faint dwarf galaxies are apparently surviving examples of the very first bound systems. 9. Analysis of Chemical Technology Division waste streams This document is a summary of the sources, quantities, and characteristics of the wastes generated by the Chemical Technology Division (CTD) of the Oak Ridge National Laboratory. The major contributors of hazardous, mixed, and radioactive wastes in the CTD as of the writing of this document were the Chemical Development Section, the Isotopes Section, and the Process Development Section. The objectives of this report are to identify the sources and the summarize the quantities and characteristics of hazardous, mixed, gaseous, and solid and liquid radioactive wastes that are generated by the Chemical Technology Division (CTD) of the Oak Ridge National Laboratory (ORNL). This study was performed in support of the CTD waste-reduction program -- the goals of which are to reduce both the volume and hazard level of the waste generated by the division. Prior to the initiation of any specific waste-reduction projects, an understanding of the overall waste-generation system of CTD must be developed. Therefore, the general approach taken in this study is that of an overall CTD waste-systems analysis, which is a detailed presentation of the generation points and general characteristics of each waste stream in CTD. The goal of this analysis is to identify the primary waste generators in the division and determine the most beneficial areas to initiate waste-reduction projects. 4 refs., 4 figs., 13 tabs 10. Chemical detection, identification, and analysis system The chemical detection, identification, and analysis system (CDIAS) has three major goals. The first is to display safety information regarding chemical environment before personnel entry. The second is to archive personnel exposure to the environment. Third, the system assists users in identifying the stage of a chemical process in progress and suggests safety precautions associated with that process. In addition to these major goals, the system must be sufficiently compact to provide transportability, and it must be extremely simple to use in order to keep user interaction at a minimum. 
The system created to meet these goals includes several pieces of hardware and the integration of four software packages. The hardware consists of a low-oxygen, carbon monoxide, explosives, and hydrogen sulfide detector; an ion mobility spectrometer for airborne vapor detection; and a COMPAQ 386/20 portable computer. The software modules are a graphics kernel, an expert system shell, a data-base management system, and an interface management system. A supervisory module developed using the interface management system coordinates the interaction of the other software components. The system determines the safety of the environment using conventional data acquisition and analysis techniques. The low-oxygen, carbon monoxide, hydrogen sulfide, explosives, and vapor detectors are monitored for hazardous levels, and warnings are issued accordingly
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8183943629264832, "perplexity": 4530.926795477606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/warc/CC-MAIN-20161020183837-00313-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.cheenta.com/the-numbered-chessboard-tomato-subjective-188/
# The Numbered Chessboard (TOMATO Subjective 188)

##### Reasoning, Arithmetic.

Problem: Consider the squares of an $$8 \times 8$$ chessboard filled with the numbers 1 to 64 in order, row by row (as in the figure below, not reproduced here). If we choose 8 squares with the property that there is exactly one from each row and exactly one from each column, and add up the numbers in the chosen squares, show that the sum always adds up to 260.

Solution: The problem asks us to choose numbers so that they come from distinct rows and distinct columns. We write the numbers in the table in a manner that helps with the calculation. This is how it will be done: let the number in the $$i^{th}$$ row and $$j^{th}$$ column be $$x$$. If we carefully observe the table, we find an intuitive way of representing $$x$$: $$x = 8(i-1) + j$$ if $$x$$ is the element in the $$i^{th}$$ row and $$j^{th}$$ column. Now all that is left is to sum eight such numbers so that no two $$i$$ values and no two $$j$$ values are the same. Therefore the total sum is: $$(8(i_1-1)+ j_1) + (8(i_2-1) + j_2) + \dots + (8(i_8-1) + j_8)$$ Since the chosen squares use each row and each column exactly once, $$i_1, \dots, i_8$$ and $$j_1, \dots, j_8$$ are each a permutation of $$1, \dots, 8$$, so the sum equals $$8\sum_{k=1}^{8}(k-1) + \sum_{k=1}^{8} k = 8 \cdot 28 + 36 = 260.$$

August 25, 2016
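As a quick numerical check of the result (an added illustration, not part of the original post), the short Python sketch below tries many random choices of one square per row and per column and confirms that each selection sums to 260.

```python
# Brute-force check: any selection of one square per row and per column sums to 260.
import random

board = [[8 * i + j + 1 for j in range(8)] for i in range(8)]  # numbers 1..64, row by row

for _ in range(1000):
    cols = random.sample(range(8), 8)              # a random permutation of the columns
    total = sum(board[row][cols[row]] for row in range(8))
    assert total == 260, total

print("All 1000 random selections sum to 260.")
```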
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8213451504707336, "perplexity": 166.15184689911894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813818.15/warc/CC-MAIN-20180221222354-20180222002354-00167.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-derivative-of-ln-sqrt-sin-2x
Calculus Topics

# How do you find the derivative of ln(sqrt(sin(2x)))?

Jan 3, 2016

#### Answer:

Chain rule: $\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{\mathrm{dy}}{\mathrm{du}} \frac{\mathrm{du}}{\mathrm{dv}} \frac{\mathrm{dv}}{\mathrm{dw}} \frac{\mathrm{dw}}{\mathrm{dx}}$

#### Explanation:

To do so, we'll rename $y = f(x) = \ln(u)$, then $u = \sqrt{v}$, $v = \sin(w)$ and finally $w = 2x$.

$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{1}{u} \left(\frac{1}{2\sqrt{v}}\right) \cos(w)\,(2) = \frac{\cancel{2}\,\cos(w)}{\cancel{2}\, u \sqrt{v}}$

Substituting $u$, $v$ and $w$, in this order:

$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{\cos(w)}{\sqrt{v}\,\sqrt{v}} = \frac{\cos(w)}{v} = \frac{\cos(w)}{\sin(w)} = \frac{\cos(2x)}{\sin(2x)} = \cot(2x)$
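As a quick sanity check (an added illustration, not part of the original answer), SymPy confirms the same result symbolically:

```python
# Symbolic check that d/dx ln(sqrt(sin(2x))) equals cot(2x), using SymPy.
import sympy as sp

x = sp.symbols('x')
expr = sp.log(sp.sqrt(sp.sin(2 * x)))
derivative = sp.simplify(sp.diff(expr, x))

print(derivative)                                   # an expression equivalent to cot(2*x)
assert sp.simplify(derivative - sp.cot(2 * x)) == 0
```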
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871207475662231, "perplexity": 6525.617605555257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00029.warc.gz"}
http://mathhelpforum.com/math-topics/174234-electric-field.html
# Math Help - electric field 1. ## electric field charges of 1micro coloumb 8 microcoloumb and 27 micro coloumb and so on are placed at x=1,2...10. then determine the electric field at origin 2. Originally Posted by prasum charges of 1micro coloumb 8 microcoloumb and 27 micro coloumb and so on are placed at x=1,2...10. then determine the electric field at origin That's not the way we work around here. With 147 posts, I'd have thought you'd have figured that out by now. Follow forum rule # 11 and show some effort. What have you done so far? What ideas have you had? 3. i have supposed a test charge at origin and then used the principle of superimposition 4. And what did you get? 5. is this right or wrong 6. It's a perfectly good way to proceed. I'm asking what your result was. 7. 55 8. What are the units and direction? 9. units are micro coloumb/m^2 how will i find the direction 10. I see where your 55 comes from, and I agree there's a 55 in your answer. Shouldn't there also be a $1/(4\pi\varepsilon_{0})$ in your answer? As for the direction, all the point charges contribute to the electric field in the same direction. Check this definition for the direction of the unit vector $\hat{\mathbf{r}}.$ The units of the electric field, after all the cancellations have taken place, is newtons per coulomb in the SI system.
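For concreteness, here is a small Python sketch of the superposition the thread describes (added for illustration, not part of the original exchange). It assumes the positions x = 1, 2, ..., 10 are in metres and that all charges are positive, so every contribution at the origin points the same way.

```python
# Superposition of Coulomb fields at the origin from charges k^3 microcoulomb placed at x = k metres.
K_E = 8.988e9            # Coulomb constant 1/(4*pi*eps0), in N*m^2/C^2

total = 0.0
for k in range(1, 11):
    q = (k ** 3) * 1e-6  # charge in coulombs
    r = float(k)         # distance from the origin in metres (assumed unit)
    total += K_E * q / r ** 2

# Each term contributes k^3/k^2 = k microcoulomb/m^2, so the bracketed sum is 55, as in the thread.
print(f"|E| at the origin is about {total:.3e} N/C, directed along -x (away from the positive charges)")
```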
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8460671305656433, "perplexity": 1113.708789412851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776426734.39/warc/CC-MAIN-20140707234026-00062-ip-10-180-212-248.ec2.internal.warc.gz"}
http://mathematica.stackexchange.com/questions/40837/how-do-i-switch-between-two-notebooks-in-mathematica-9-interface
# How do I switch between two notebooks in Mathematica 9 interface? [duplicate] This question already has an answer here: How do I switch between two notebook windows in the Mathematica 9 interface? I would like to use a keyboard shortcut such as Ctrl+Tab or Alt+ Tab as with windows or tabs in programs like Firefox. - ## marked as duplicate by Yves Klett, Vitaliy Kaurov, Michael E2, Ajasja, ArtesJan 21 '14 at 13:59 If you're on a Mac, the default is usually CMD + . You can remap the keyboard shortcuts via the links in Öskå's comment above. FWIW, you can also create your own palette using the Button` command and then install it in Mathematica via the Palette menu; this is useful if you have a variety of actions you want to take, for example, rearranging / resizing windows, restyling cells, ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20673012733459473, "perplexity": 3265.4377554273105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445142.9/warc/CC-MAIN-20151124205405-00079-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.math.princeton.edu/events/act-globally-compute-locally-group-actions-fixed-points-and-localization-2014-11-06t200011
# Act globally, compute locally: group actions, fixed points, and localization - Tara Holm , Cornell and the IAS Fine Hall 314 Localization is a topological technique that allows us to make global equivariant computations in terms of local data at the fixed points. For example, we may compute a global integral by summing integrals at each of the fixed points. Or, if we know that the global integral is zero, we conclude that the sum of the local integrals is zero. This often turns topological questions into combinatorial ones and vice versa. I will give an overview of how this technique arises in symplectic geometry, and some details about some recent projects in this area.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9776461720466614, "perplexity": 368.604132842517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522343.41/warc/CC-MAIN-20171213084839-20171213104839-00077.warc.gz"}
https://www.ncbi.nlm.nih.gov/pubmed/9252108?dopt=Abstract
Format Choose Destination Biochim Biophys Acta. 1997 Jul 18;1340(2):215-26. # A prediction of DPP IV/CD26 domain structure from a physico-chemical investigation of dipeptidyl peptidase IV (CD26) from human seminal plasma. ### Author information 1 Department of Pharmaceutical Sciences, University of Antwerp (U.I.A.), Wilrijk, Belgium. [email protected] ### Abstract Human DPP IV, isolated from seminal plasma by means of immobilised adenosine deaminase, occurs in different forms which are distinguishable by net charge and native molecular weight. Charge differences arise primarily from different degrees of glycosylation containing various amounts of sialic acid. The majority of DPP IV isolated from total seminal plasma consists of the extracellular part of the protein starting at Gly-31. It is a very stable protein resisting high concentrations of denaturant. Unfolding experiments under reducing conditions are indicative of the existence of at least two domains which function independently. One of these domains is highly stabilised by disulfide bonds. Disruption of the disulfide bonds does not affect the activity, the dimeric state nor the adenosine deaminase binding properties of the protein but renders it more susceptible to proteolysis. The low-angle X-ray scattering spectrum is consistent with a model for a protein containing two subunits, each composed of three domains linked by flexible regions with low average mass. The secondary structure composition, determined by FTIR spectrometry, indicates that 45% of the protein consists of beta-sheets, which is higher than expected from computed secondary structure predictions. Our results provide compelling experimental evidence for the three-domain structure of the extracellular part of DPP IV. PMID: 9252108 DOI: 10.1016/s0167-4838(97)00045-9 [Indexed for MEDLINE]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.84083092212677, "perplexity": 6394.932392070412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576047.85/warc/CC-MAIN-20190923043830-20190923065830-00320.warc.gz"}
http://mathoverflow.net/questions/101828/why-limit-of-discrete-series-representation
# Why limit of discrete series representation?

In what sense is the limit of discrete series representation of $SL(2, \mathbb{R})$ a limit of discrete series representations? Where does the name originate from?

-

... for what? $\:$ –  Ricky Demer Jul 10 '12 at 7:28

Wikipedia says "Limit of discrete series representations are tempered representations, which means roughly that they only just fail to be discrete series representations." Does that answer the question? (I wouldn't know, just searched on Wikipedia to see what the term means). –  George Lowther Jul 10 '12 at 12:28

Dear Mrc, As far as I know, Paul Garrett's explanation is correct; limits of discrete series have the same character formulas etc. as discrete series, but one has to allow the parameter to pass to a "limit" inside the wall of a Weyl chamber. Regards, –  Emerton Jul 10 '12 at 14:51

I vaguely recall something about a topology on the space of irreducible unitary representations - this would be similar to the topology on the Pontrjagin dual of a LCA group. Then perhaps the limits of discrete series representations are actually limits in the topological sense? –  MTS Jul 10 '12 at 17:40

@MTS, this topology is called the Fell topology, and it turns out that this topology is evil (i.e. not Hausdorff). It turns out that in this topology, principal series reps. converge to limits of discrete series as $it\to 0$. There's a famous "picture" of the $SL_{2}$ reps. as a "double cross" in the complex plane which represents the unitary dual; see for example Lang's SL2(R) book, p. 124. –  Asaf Jul 10 '12 at 19:02

Here is the explanation I know, just for $SL_2$. The discrete series reps. have realizations in the Hardy spaces $H_n$ which have the norm $$\|f\|_n^2 = n\int_{D}|f(z)|^2(1-|z|^{2})^{n-1}\,dx\,dy$$ (notice this norm is scaled a bit differently than usual). The limit of discrete series is realized inside $H_2$ with the norm $\|f\|_2^{2}=\frac{1}{2\pi}\sup_{0\leq r<1} \int_{0}^{1}|f(re^{2\pi it})|^{2}\,dt$. So from what I know (which probably has nothing to do with rep. theory), one can consider the Hardy spaces with a continuous parameter, say $r$, with the norm $\|f\|_r^{2}=r\int_{D}|f(z)|^2(1-|z|^{2})^{r-1}\,dx\,dy$. It's not hard to show that for $H_{r}$ you have a complete orthonormal family $f_{n,r}=\left(\frac{\Gamma(r+n+1)}{n!\,\Gamma(r+1)}\right)^{1/2}z^n$. Then one can show that $H_{2}=\{f\in \cap_{r>0} H_{r} \mid \lim_{r\to 0}\|f\|_r^{2} \text{ exists and is finite} \}$. It might be interesting to try to work it out in different models for the representations.

-

This is a nice explanation and exactly in the spirit I was looking for. $n$ is the highest weight going to one; $n$ taking real values makes sense representation-theoretically if we consider the universal covering of $\mathrm{SL}_2(\mathbb{R})$, and modular form people rather work with the upper-half-plane model for the Hardy space. Thank you. –  Marc Palm Jul 10 '12 at 14:25

These repns are not actually "discrete series", in that they do not appear in $L^2(G)$. Yet their construction/description is completely parallel to that of the discrete family of repns called "discrete series". Since the relevant parameter is discrete, it is hard to conjure up any "limit-taking process", indeed, in a mathematical sense. But in a colloquial sense, since the parameter (for $SL_2(\mathbb R)$ just the "weight") takes a more extreme value for these repns than for "genuine discrete series", it's not completely unreasonable to refer to them in the form "limit of discrete series".
I couldn't give a citation off-hand, but probably Harish-Chandra and others used this term in the 1950s, also applied to more general (reductive and semi-simple) Lie groups. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9211885333061218, "perplexity": 473.79876848786444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928757.11/warc/CC-MAIN-20150521113208-00242-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.coursehero.com/file/pvppqvr/pts-Given-that-the-MPC8-we-know-that-2-of-every-dollar-increase-in-gross-income/
# Given that the MPC=.8 we know that .2 of every

• Homework Help • m.calcasas • 18 • 55% (11) 6 out of 11 people found this document helpful

This preview shows page 3 - 7 out of 18 pages.

Question 5 (3 / 3 pts)
Given that the MPC = .8, we know that .2 of every dollar increase in gross income is saved. Since the increase in income is \$100, we know the leakage due to savings is: \$100 / 20 cents / \$20 / \$80

We have three leakages, tracing out \$100: for every additional dollar in gross income, the consumer saves 20 cents since the MPC = .8. The government gets 25 cents since the tax rate is .25. And finally, 20 cents is leaked out to the purchase of imported goods. Multiplying by 100 we have the following: Y up by 100, savings up by 20, taxes up by 25, imports up by 20, consumption up by 35.

Question 6 (3 / 3 pts)
To find out how much consumption increases we need to take the increase in income (\$100) and subtract out the leakages. So take the \$100 and subtract your answers from #3, #4 and #5 above. When income increases by \$100, consumption increases by:

Question 7 (3 / 3 pts)
What would happen to the multiplier if the mpi rises to .25? Round to 2 decimal places. The new multiplier is .65.

Question 8 (3 / 3 pts)
What would happen to the size of the leakage if the mpi rises to .25?

Question 9 (4 / 4 pts)
In this question, we are going to dig deeper into the Taylor Rule and its variants (modifications).

Federal Reserve data from October 1, 2011:
Potential GDP growth y* = 1.7%
Actual GDP growth yA = 2.0%
Inflation PCE (actual inflation) πA = 2.6%
Effective federal funds rate = .07%

As Taylor assumed, we take the equilibrium real rate of interest r* = 2% and the optimal (target) inflation rate π* also equal to 2%.

The standard (original) Taylor rule formula: i_f^TR = r* + πA + 0.5[πA - π*] + 0.5[yA - y*]

Using the 'standard' Taylor rule from above and using the data provided, what is the federal funds rate implied by the 'standard' Taylor Rule? 3.33% / 5.05% / 1.56% / 2.04%

Data as of 4th quarter 2011. Original Taylor Rule: i_f^TR = r* + πA + 0.5[πA - π*] + 0.5[yA - y*], so 5.05 = 2 + 2.6 + .5[2.6 - 2] + .5[2.0 - 1.7].
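The Taylor rule arithmetic above is easy to reproduce in a few lines; the sketch below (added for illustration, not part of the original document, with parameter names chosen here) just restates the formula quoted in the question and evaluates it with the Q4 2011 figures.

```python
# Standard Taylor rule as written above: i = r* + pi_a + 0.5*(pi_a - pi*) + 0.5*(y_a - y*)
def taylor_rule(r_star, pi_actual, pi_target, y_actual, y_star):
    return r_star + pi_actual + 0.5 * (pi_actual - pi_target) + 0.5 * (y_actual - y_star)

# Figures quoted in the question (all in percent)
rate = taylor_rule(r_star=2.0, pi_actual=2.6, pi_target=2.0, y_actual=2.0, y_star=1.7)
print(rate)  # 5.05, i.e. the implied federal funds rate of 5.05%
```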
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8826763033866882, "perplexity": 4572.778312425735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055601.25/warc/CC-MAIN-20210917055515-20210917085515-00277.warc.gz"}
https://discourse.mc-stan.org/t/rstan-vectorized-modelling-efficiency-and-its-components-setup/3634
# Rstan vectorized modelling efficiency and its components setup Hi all, My model currently contains ~1000 variables y ~ normal ( beta1x1[n] + beta2x2[n] + … , sigma) with some terms defined as beta * x ^ parameter beta * 1 / (1 + exp(-0.05 * ((x - parameter1)) / parameter2)) - (1 / (1 + exp(-0.05 * (( 0 - parameter1)) / parameter2))) From the manual it says that vectorized form is faster for Rstan, which prompted me to converting my program and data. Would it certainly be faster for this case where there would be several terms and parameters? I tried running it unvectorized and the model took too long to setup. Another concern would be constraining beta vectors element-wise. Since rstan can only define 1 set of bounds for a vector, is this a valid work-around where we retain beta as scalar under parameters and under model define a vector which will group a set of beta of the predictors (got the idea from here)? i.e. parameters{ real <lower=0.5, upper=1.5> beta1 real <lower=0.2, upper=2.5> beta2 } model{ beta1 ~ normal (mean, sd) beta2 ~ normal (mean, sd) vector [K] beta_vector = [beta1, beta2, …]’ y ~ normal(…) } Using rstan 2.17.3 a Windows 7 with 8GB ram. Would be glad for any help/clarification. Thanks! Yes, that is legal. The word vectorization means different things in different contexts for Stan but the one you want to concentrate on is calling the likelihood as few times as possible or conversely calling it with the largest inputs as possible, ideally just ``````target += normal_lpdf(y | X * beta, sigma); `````` or something like that. Thank you, Ben. Will try it out.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456271290779114, "perplexity": 4278.752231776284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662509990.19/warc/CC-MAIN-20220516041337-20220516071337-00581.warc.gz"}
https://www.physicsforums.com/threads/the-continuum-hypothesis-and-number-e.976243/
# I The Continuum Hypothesis and Number e

#### Quasimodo

Summary: The Continuum Hypothesis and Number e

Now, I must ask a very stupid question: When taking: $$\lim_{_{n \to \infty} } (1+\frac{1}{n})^n=e\\$$ the $n$ we use take its values from the set: $\left\{ 1,2,3 ... \right\}$ which has cardinality $\aleph_0$, which is equivalent maybe, I say maybe to writing: $$\ (1+\frac{1}{\aleph_0})^{\aleph_0}=e\\$$ Upon: $$\lim_{_{n \to \infty} } (1+\frac{1}{n})^{2n}=e^2\\$$ we take, $$\ (1+\frac{1}{\aleph_0})^{2\aleph_0}=e^2\\$$ So, since two equal power bases give two different results, we have to assume that their exponents are different hence: $$2\aleph_0 > \aleph_0$$

#### BvU, Homework Helper

The question being "since the continuum hypothesis says that the smallest cardinal number $>\aleph_0$ is $\mathfrak c$, have I now proven that ${\mathfrak c} = 2 \aleph_0$ " ? With a possible successor "so with ${\mathfrak c} = 2^ {\aleph_0} = 2 \aleph_0$ " ? As you guessed: no and no. And you may not write $\ (1+\frac{1}{\aleph_0})^{\aleph_0}=e \$.

#### Homework Helper

... which is equivalent maybe, I say maybe to writing: $$\ (1+\frac{1}{\aleph_0})^{\aleph_0}=e\\$$

No. Just no.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902984619140625, "perplexity": 4989.442253776433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573071.65/warc/CC-MAIN-20190917121048-20190917143048-00221.warc.gz"}
http://crypto.stackexchange.com/questions/17820/how-do-i-decide-what-mode-to-use
# How do I decide what mode to use?

I'll be using AES from OpenSSL. I understand why I don't want to use ECB from reading pages like this Wikipedia article, which has a great example of what happens when you attempt to encrypt with electronic codebook. But what I'm missing is some sort of comparison table with all the other modes so I understand the positives and negatives of the various modes. Where can I find this information so I can make an informed decision on what mode to use?

Update 1: Looks like OpenSSL's EVP API (which I believe is the recommended API when using OpenSSL?) only supports 2 modes – GCM and CCM – which drastically reduces my choices.

Update 2: Unsurprisingly, that OpenSSL page is somewhat misleading or incomplete. Looking at /usr/include/openssl/evp.h I see there are 30+ variations on AES to choose from, including:

• EVP_aes_*_ecb()
• EVP_aes_*_cbc()
• EVP_aes_*_cfb1()
• EVP_aes_*_ofb()
• EVP_aes_*_ctr()
• EVP_aes_*_xts()
• ...and many more!

So I'm back to reading about all the different modes so I can make a semi-intelligent choice as to which I should be using.

-

The reason such a table does not exist (and should not) is that there are so many different use cases and a table could only cover some small subset of use cases. For example, block ciphers were traditionally used to provide confidentiality. Many use cases, however, would also require integrity. Do you have some other way to provide integrity (e.g., HMAC) or do you need that to come directly from the block cipher? The answer to that question will completely change which mode would be recommended. –  mikeazo Jun 22 '14 at 0:24
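To make the integrity point in the comment concrete, here is a short sketch of an authenticated mode in use. It is an added illustration, not from the original thread, and it uses Python's `cryptography` package rather than OpenSSL's C-level EVP API; the point is only that a mode like GCM provides confidentiality and integrity in one primitive.

```python
# Minimal AES-256-GCM round trip with the Python `cryptography` package.
# GCM is an authenticated mode: decryption raises if the ciphertext or AAD was tampered with.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                 # 96-bit nonce; must never repeat under the same key
aad = b"header bound to the message but not encrypted"
ciphertext = aesgcm.encrypt(nonce, b"secret message", aad)

plaintext = aesgcm.decrypt(nonce, ciphertext, aad)
assert plaintext == b"secret message"
```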
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3306950330734253, "perplexity": 1082.0812034271773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00340-ip-10-171-96-226.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/62696/what-is-the-explanation-for-the-elements-of-this-set/62702
# What is the explanation for the elements of this set? From Stephen Abbott's Understanding Analysis (some parts omitted): Exercise 1: (a) Using the particular set $A = \{a,b,c\}$, exhibit two different $1-1$ mappings from $A$ into $P(A)$. (b) Letting $B = \{1,2,3,4\}$, produce an example of a $1-1$ map $g: B \to P(B)$. Exercise 2: Construct $B$ using the following rule. For each element $a \in A$, consider the subset $f(a)$. This subset of $A$ may contain the element $a$ or it may not. This depends on the function $f$. If $f(a)$ does not contain $a$, then we include $a$ in our set $B$. More precisely, let $B = \{a \in A: a \notin f(a) \}$. Return to the particular functions constructed in Exercise 1 and construct the subset $B$ that results using the preceding rule. In each case, note that $B$ is not in the range of the function used. Solution Exercise 1: (a) Given set $A = \{a,b,c\}$, $A$ can be mapped in a $1-1$ fashion into $P(A)$ in many ways. For example, we could write (i) $a \to \{a\}$, $b \to \{ a,c\}$, $c \to \{a,b,c\}$. As another example we might say (ii) $a \to \{b,c\}$, $b \to \emptyset$, $c \to \{a,c\}$. (b) An example of a $1-1$ mapping from $B$ to $P(B)$ is: $1 \to \{1\}$, $2 \to \{ 2,3,4\}$, $3 \to \{1,2,4\}$, $4 \to \{2,3\}$. Solution Exercise 2: For the example in (a) (i), the set $B = \{b\}$. For example (ii) we get $B=\{a,b\}$. In part (b) we find $B = \{3,4\}$. In every case, the set $B$ fails to be in the range of the function that we defined. Question(s): Could someone please help me understand what I am missing? I don't think I had any trouble with the first exercise, but my answer to the second was different. For (a) (i), I would have thought that $B = \{b,c\}$. For (a) (ii), I had $B = \{ a,b,c\}$. For (b), I had $\{ 2,3,4\}$. I can't really seem to find a pattern, or figure out what I am doing wrong. I thought I might be getting confused with distinctions between elements and "the sets containing" elements, but I am not sure. For instance, with (a) (i) is the image $\{ \{ a\}, \{ a,c\} \{ a,b,c\}\}$ ? Is $a$ getting mapped to $a$ or to "the set containing $a$" which is written as $\{ a\}$? If it were the latter, I would think $B = \{ a,b,c\}$ instead, but this doesn't seem to make much sense... So, as I said, if someone could help explain it all to me, I would really appreciate it. Thanks! Edit: The solutions in the block-text are not my own, they were those provided by the author... - It was poor choice of notation by the authors to use $B$ in 2 related ways like this. –  Srivatsan Sep 7 '11 at 23:00 @Sriv: Yes, the notation is awful. Not only is $B$ overloaded, $a$ is too. So an explanation has to contain "... To find out whether $b\in B$, we let $a=b$ and ask whether $a\in f(a)$. Now, $f(a)$, which is $f(b)$, is $\{a,c\}$, and that does not contain $a$ (because $a$ is $b$ and $b$ is not a member) -- yes yes it does contain $a$, but $a$ is not the value of $a$ anymore, so that doesn't matter ..." –  Henning Makholm Sep 7 '11 at 23:13 Not exactly on topic, but the OP's Gold badge to Reputation ratio must be the highest on the website! –  Ragib Zaman Sep 8 '11 at 4:12 To calculate $B$ systematically, I would recommend you to forget about the complete description for $f$; instead focus on one element $x$ and its image $f(x)$ at a time. One important thing to keep in mind is that $x$ is an element of the set $A = \{ a, b, c\}$, whereas $f(x)$ is a subset of $A$. So it is perfectly legitimate to ask whether $x \in f(x)$ or not. 
Let me do the example (a)(i) in full. • Take the element $a$. How do I know whether $a \in B$? The definition says it belongs to $B$ if and only if $a \not\in f(a)$. Now, consulting the function, we find that $f(a) = \{ a \}$. So we are interested in knowing if $a \in \{ a \}$ holds or not. This statement is indeed true; hence $a \in f(a)$ is also true. Therefore, from the definition of $B$, we conclude that $a$ is not present in $B$. • Now, for the element $b$, we have $f(b) = \{ a, c\}$. Now, the question is whether or not $b \in \{ a,c\} = f(b)$. This time, we have $b \not\in f(b)$. Hence $b \in B$. • Finally, for the element $c$, the image $f(c)$ is $\{ a,b,c \}$. Notice that $c$ is present in $\{ a,b,c \} = f(c)$. What does this tell you about the membership of $c$ in $B$? The remaining exercises involve a similar reasoning; can you take it from here? You are also asked to note that $B$ is not in the range $f$ in each case. Here, $f(A)$, the range of $f$, is a set containing subsets of $A$. For the above example, $$f(A) = \{ \{a\}, \{ a,c \}, \{ a,b,c \} \}.$$ Also $B$ is just a subset of $A$ (this is actually even more evident). So the exercise asks you to check that $B$ is not an element of $f(A)$. In the above example, $B = \{ b \}$, and it is easy to verify that $b \not\in f(A)$. - thank you, that definitely helps a lot. I still have to ask: in order to "note that $B$ is not in the range of the function" we then need to consider the range of the function to be $\{ \{ a\}, \{ a,c\} \{ a,b,c\}\}$ as I wrote in the question? And that's why $B = \{b\} \notin f(A)$? Is that correct? In other words $B$ is a set, but the important observation is that it is not an element of the range of the function? –  ghshtalt Sep 7 '11 at 23:26 by the way I hope that $f(A)$ is the correct notation for the range of the function. –  ghshtalt Sep 7 '11 at 23:27 Your solutions to exercise 1 are fine. You have the image of $A$ correct in 2 and yes, in the first function $a$ is getting mapped to $\{a\}$. Using your first function $A \to P(A), a \in f(a), b \not \in f(b), c \in f(c), \text{ so } B=\{b\}$. Then you are asked to notice that there is no $d \in A$ such that $f(d)=\{b\}$. You can check your other function the same way. This is setting up for Cantor's diagonal proof that $|A| \lt |P(A)|$ Added: for Exercise 2 it is true that $f$ goes from elements of $A$ to subsets of $A$. So $a \to \{a\}$, which is why we can ask if $a \in f(a)$. referring to the function in Example (a)(i), $c$ is an element of $\{a,b,c\}=f(c)$, which is why $c$ is not an element of $B$. For example (a)(ii), $c \in \{a,c\}$, so $c \not \in B$ but the other two are in $B$. - Sorry, those weren't actually my solutions, I've updated the question to hopefully make that more clear... –  ghshtalt Sep 7 '11 at 23:04
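If it helps to see the bookkeeping done mechanically, here is a small Python sketch (an added illustration, not part of the original answers) that builds $B = \{a \in A : a \notin f(a)\}$ for the map in Example (a)(i) and checks that $B$ is not in the range of $f$, which is exactly the diagonal argument being set up.

```python
# Example (a)(i): A = {a, b, c} with f(a) = {a}, f(b) = {a, c}, f(c) = {a, b, c}.
A = {"a", "b", "c"}
f = {"a": {"a"}, "b": {"a", "c"}, "c": {"a", "b", "c"}}

# B collects every element that is NOT a member of its own image.
B = {x for x in A if x not in f[x]}
print(B)                                    # {'b'}

# B is a subset of A, but it is not equal to f(x) for any x, so B is not in the range of f.
assert B not in [f[x] for x in A]
```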
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9424777030944824, "perplexity": 136.945225512911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928831.69/warc/CC-MAIN-20150521113208-00220-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/planet-velocity.245188/
# Homework Help: Planet velocity

1. Jul 15, 2008

1. The problem statement, all variables and given/known data

At perihelion a planet in another solar system is 175 x 10^6 km from its Sun and is traveling at 40 km/s. At aphelion it is 250 x 10^6 km distant and is traveling at?

2. Relevant equations

I_i W_i = I_f W_f

3. The attempt at a solution

What do I do?

2. Jul 15, 2008

### Dick

Use conservation of angular momentum as you are trying to do. What are I and W in terms of mass m, velocity v and radius r of the planet?
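For reference, a sketch of where the hint leads (it assumes, as is standard at perihelion and aphelion, that the velocity is perpendicular to the radius at both points):

$$L = I\omega = (mr^2)\,\frac{v}{r} = mvr,$$

so conservation of angular momentum between perihelion and aphelion gives

$$m v_p r_p = m v_a r_a \quad\Longrightarrow\quad v_a = v_p\,\frac{r_p}{r_a} = 40\ \mathrm{km/s}\times\frac{175\times 10^{6}\ \mathrm{km}}{250\times 10^{6}\ \mathrm{km}} = 28\ \mathrm{km/s}.$$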
http://slideplayer.com/slide/781273/
# Dynamic Models

## Introduction

- Assess the partial adjustment model as an example of a dynamic model
- Examine the ARDL model as a means of testing the partial adjustment model
- Discuss the Sharpe-Lintner model
- Derive the Error Correction Model from the ARDL model

This type of dynamic model is part of the Koyck distribution class of models. It is used where adjustment does not occur immediately, but takes a number of time periods to complete. We can apply a specific restriction to a general ARDL model to determine whether partial adjustment is taking place.

This model has as its dependent variable a desired or planned value. This desired value is then determined by the usual explanatory variables. The partial adjustment mechanism takes the form where the change in y is equal to a fraction λ of the difference between the desired value of y and its actual value in the previous time period. If λ is 0, there is no adjustment from one time period to the next. If λ is 1, adjustment is immediate. To produce an estimating equation, we substitute this adjustment mechanism into the model so that the expression is in terms of y as the dependent variable rather than y*.

## Estimating Equation

The estimating equation, where y is the dependent variable, follows from the process of substitution.

## ARDL Model

The estimating equation is the same as the conventional ARDL model with a specific restriction applied: the coefficient on the x(t-1) variable is equal to 0. In addition, the estimated coefficients on the other variables can be used to produce estimates of the coefficients in the original model with the 'desired' variable.

## Lintner Model

An example of the partial adjustment model is Lintner's Dividend-Adjustment Model. The model suggests that dividends are related to company profits, but when profits rise, dividends do not rise in the same proportion immediately. Lintner suggests that firms have a long-run desired/target pay-out ratio between dividends and profits, in which the dividend payout relative to profits is a desired level. Lintner estimated this model for the US. The coefficient on D(t-1) is equal to (1-λ) = 0.70. This means that the speed of adjustment is λ = (1-0.70) = 0.30, which suggests adjustment is relatively slow. The coefficient on π is equal to (βλ), so β = 0.15/0.30 = 0.5. This gives a value for the payout ratio of 0.5.

## Error Correction Models

The error correction model is a short-run dynamic model, consisting of differenced variables except for the error correction term. The error correction term reflects the difference between the dependent and explanatory variable, lagged one time period. The model can incorporate a number of lags on both the dependent and explanatory variables.

As with the partial adjustment model, the error correction model (ECM) can be interpreted as a general ARDL model to which a specific restriction is applied. The model can be derived from the basic ARDL model. Turning the ARDL model into the ECM involves the following steps (worked through in the sketch below):

- Subtract y(t-1) from both sides of the ARDL equation.
- Add β(2)x(t-1) to, and subtract the same amount from, the right-hand side of the equation.
- Collect terms.
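The slide images carrying the equations did not survive in this transcript, so what follows is only a sketch of the two derivations the text describes, in standard textbook notation; the symbols are mine and may differ from those on the original slides.

Partial adjustment. The desired value and the adjustment mechanism are

$$y_t^* = \alpha + \beta x_t + u_t, \qquad y_t - y_{t-1} = \lambda\,(y_t^* - y_{t-1}), \qquad 0 \le \lambda \le 1,$$

and substituting the first into the second gives the estimating equation

$$y_t = \lambda\alpha + \lambda\beta\, x_t + (1-\lambda)\, y_{t-1} + \lambda u_t.$$

Written against the general ARDL model

$$y_t = \beta_1 + \beta_2 x_t + \beta_3 x_{t-1} + \beta_4 y_{t-1} + u_t,$$

this is the special case with the restriction $\beta_3 = 0$, and the structural coefficients can be recovered as $\lambda = 1 - \beta_4$, $\alpha = \beta_1/\lambda$, $\beta = \beta_2/\lambda$.

Error correction. Starting from the same ARDL model, subtract $y_{t-1}$ from both sides, add and subtract $\beta_2 x_{t-1}$ on the right, and collect terms:

$$\Delta y_t = \beta_1 + \beta_2\,\Delta x_t + (\beta_2 + \beta_3)\, x_{t-1} - (1-\beta_4)\, y_{t-1} + u_t,$$

which is usually written with an error correction term as

$$\Delta y_t = \beta_1 + \beta_2\,\Delta x_t - \tau\,\bigl(y_{t-1} - \theta\, x_{t-1}\bigr) + u_t, \qquad \tau = 1 - \beta_4, \quad \theta = \frac{\beta_2 + \beta_3}{1 - \beta_4}.$$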
## Error Correction Model

The previous rearrangements produce the error correction form of the model. Finally, to produce an ECM, a restriction needs to be applied to the lagged level variables.

## Error Correction Term

The ECM is usually written in a form where the parameter τ is the coefficient on the error correction term.

## Long-Run

In the long run, we assume that the Δy and Δx variables (all variables in logs) grow at a constant rate g. If we also assume the relationship between x and y is of the form y* = k·x*, taking logs gives log(y*) = log(k) + log(x*). To find k we need to take the antilog of the resulting expression.

## ECM

Estimates were produced for an ECM using 60 observations (all variables in logs), where the long-run growth rate of Δx and Δy is 0.02. This produces a long-run relationship between x and y of the form y = k·x (see the sketch below for how k is recovered from the estimates).

The ECM is used to model the short run in many situations and is closely associated with cointegration (covered later). The model usually also includes lagged variables in addition to the error correction term.

## Conclusion

- The Partial Adjustment model is a version of the Koyck distribution. It provides a theoretical reason for the inclusion of lags in a model.
- The Error Correction Model (ECM) can be derived from an ARDL model.
- The estimates from an ECM can also be used to determine the long-run relationship between variables.
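The numerical estimates on the original slides are not reproduced in this transcript, so the following is only a symbolic sketch, under the stated assumptions (variables in logs, common long-run growth rate g, and the unit long-run coefficient that the y* = k·x* form implies); the notation continues from the sketch above, and the slide's value g = 0.02 would be plugged in once the estimated coefficients are known. With the error correction term written as $y_{t-1} - x_{t-1}$, the ECM is

$$\Delta y_t = \beta_1 + \beta_2\,\Delta x_t - \tau\,(y_{t-1} - x_{t-1}) + u_t.$$

In the long run $\Delta y_t = \Delta x_t = g$ and the error is at its mean of zero, so

$$g = \beta_1 + \beta_2 g - \tau\,(y^* - x^*) \quad\Longrightarrow\quad y^* - x^* = \frac{\beta_1 - (1 - \beta_2)\,g}{\tau}.$$

Since the variables are in logs, $y^* - x^* = \log k$, and taking the antilog gives

$$k = \exp\!\left(\frac{\beta_1 - (1 - \beta_2)\,g}{\tau}\right).$$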
http://requestforlogic.blogspot.com/2011_08_01_archive.html
## Wednesday, August 24, 2011

### Holey Data, Postscript: Hole Abstraction

I love the PL community on Twitter. Yesterday, David Van Horn presented this observation:

Most papers in computer science describe how their author learned what someone else already knew. -- Peter Landin

I'm gonna be perfectly honest: I was specifically thinking of the holey data series when I responded.

@lambda_calculus Presumably the implication is that this is bad for the advancement of science. OTOH, it's a pretty viable model for a blog.

'Cause really, I was kind of surprised that I hadn't found any previous presentations of the idea (and was therefore encouraged when Conor indicated that he at least sort of had thought of it too). Chung-chieh Shan's response was great:

@simrob @lambda_calculus I'm not sure that Landin intended to imply that it's bad!

I guess I hope he's right (or at least that I am), because in the comments to Part 3, Aaron Turon correctly observes that I missed a whopper: Minamide's 1998 POPL paper "A Functional Representation of Data Structures with a Hole." Hooray for blogs, I guess: if this had been a paper review, I would have been quite embarrassed, but as a blog comment I was delighted to find out about this related work.

## A Functional Representation of Data Structures with a Hole

Minamide's paper effectively covers the same ground I covered in Part 1 of the Holey Data series: his linear representational lambdas are called hole abstractions. It's a very well-written paper from the heady days when you could talk about the proposed Standard ML Basis Library and it was still common for authors to cite Wright and Felleisen when explaining that they were proving language safety by (what amounts to) proving progress and preservation lemmas.

My favorite part of reading the paper was that it simultaneously confirmed two suspicions that I picked up after a recent discussion with Dan Licata:

1. Levy#/call-by-push-value was an unnecessarily restrictive calculus for programming with linear representational functions - ML would work just fine.
2. By using call-by-push-value, I'd avoided certain red herrings that would have plagued the development otherwise - somewhere on almost every page I thought "yep, that explanation is less clear than it could be because they don't know about the difference between value (positive) types and computation (negative) types yet."

One neat thing this paper describes that I hadn't thought about is automatically transforming non-tail-recursive programs that are naturally structurally inductive into tail-recursive programs based on hole abstractions/difference lists/linear representational functions. It seems like this optimization in particular is where many of the paper's claimed performance gains are found. It also seems like zippers-as-derivatives are pretty clearly lurking around the discussion in Section 4, which is neat.

Overall, this paper made me quite happy - it suggests there is something canonical about this idea, and proves that the idea leads to concrete performance gains. It also made me sad, of course, because it seems like the ideas therein didn't really catch on last time around. But that's part of why I didn't stop with Part 1 in the series: it's not clear, even to me, that hole abstractions/difference lists/linear representational functions are worth adding to a functional programming language if all they can do is be applied and composed.
However, with the additional expressiveness that can be found in pattern matching against hole abstractions (hell, that's a way better name than "linear representational functions," I'm just going to call them hole abstractions from now on), I think there's a significantly stronger case to be made. I've thrown a transliteration of Minamide's examples up on GitHub. The binary tree insertion example, in particular, could come from Huet's paper: it's a beautiful example of how even the "Part 1" language can implement zipper algorithms so long as you never need to move up in the tree. As for the hfun_addone function, I don't really understand it at all, I just transliterated it. In particular, it seems to not be entirely tail recursive (in particular, it seems to regular-recurse along the leftmost spine of the binary tree - if I'm not mistaken about this, I accuse line 131 of being the culprit.) ### 6.3 Logic Variable (Note: I've tried to make sure this section doesn't come across as mean-spirited in any way, but I want to emphasize: this is not intended to be any sort of criticism of Yasuhiko Minamide. His paper was terrific! Go read it!) My other favorite part of reading the paper was Section 6.3, which discusses difference lists. I plan to actually email Minamide and ask him about Section 6.3. Here's my reconstruction of events: Fictional Minamide has a great idea, implements it into a prototype ML compiler, tests it. His paper has strong theoretical and practical results, and gets accepted to POPL 1998. However, one of the reviewers says "oh, this is almost exactly difference lists in logic programming, you need to explain the connection." Fictional Minamide is not a logic programming person, has no access to a logic programming person, doesn't particularly have any reason to care about logic programming, but he does feel the need to address the concerns of a reviewer that he can't actually communicate with. Fictional Minamide manages to find, with great difficulty in the pre-Google-Scholar world, some poorly expressed explanation for what difference lists are in some random papers that are mostly about something else.1 After a couple of hours, Fictional Minamide gives up, exclaiming "okay, this makes absolutely no sense whatsoever," and writes something kinda mumbly about graph reduction that seems vaguely plausible to satisfy the reviewer's demand. The result is a section that mostly misses the point about the connection between hole abstraction and difference lists, both of which are declarative abstractions that allow a programmer to think about working modulo an uninitialized pointer in memory (though the way Minamide and I do it, the types help you way more). This is not intended as any criticism of either Real Minamide or Fictional Minamide. Indeed, it's mostly a criticism of the conference review processes: I'm pretty sure you could find similar "Section 6.3"s in my papers as well!2 I do hope, however, that my exposition in Part 1, which was in fact motivated by difference lists and reached more-or-less the exact same setup that Minamide came up with, clarifies the record on how these two ideas are connected. [Update Aug 25] I heard back from Real Minamide - while it was a long time ago, he did recall one of the reviewers mentioning logic variables, leading to the inclusion of that section. I win! Well, kind of. 
There was probably no "this makes no sense" exclamation; Minamide says that at the time his understanding was in line with my comment about working modulo an uninitialized pointer. The comments about graph reduction, which were why I thought the section misses the point, were more of a side comment. Minamide also remembered another wonderfully tantalizing tidbit: he recalls that, at POPL 1998, Phil Wadler said he'd seen a similar idea even earlier. Perhaps hole abstraction is just destined to be continuously reinvented until it finally gets included in the C++2040 standard.3

1 The work he cites is on adding the logic programming notions of unbound variables to functional programming languages, which (while I haven't looked at them) certainly don't look like they would give a good introduction to the simple-but-goofy logic programming intuitions behind difference lists. That said, I basically have never seen a good, clear, self-contained description of what a difference list is - I consider Frank's logic programming notes to be pretty good, but I recognize that I'm not a fair judge because I was also present at the associated lecture.

2 Grep for "∇", or "nominal", especially in anything prior to my thesis proposal, if you want a head start.

3 I wonder what brackets they'll use, since as of the current standard they seem to have run out.

## Monday, August 22, 2011

### Holey Data, Part 3/3: The Type Of A One-Hole Context

Let's review:

• As a prelude, I introduced the Levy# language, Bauer and Pretnar's implementation of call-by-push-value, a programming formalism that is very persnickety about the difference between value code and computation code, extended with datatypes. I have come to the position, especially after a conversation with Dan Licata, that we should really think of Levy# as an A-normal form intermediate language that is explicit about control: using it allowed me to avoid certain red herrings that would have arisen in an ML-like syntax, but you'd really want the ML-like language to actually program in.

• In Part 1, I showed how representational linear functions could capture a logic programming idiom, difference lists. Difference lists can be implemented in ML/Haskell as functional data structures, but they don't have O(1) append and turn-into-a-regular-list operations; my Levy# difference lists, represented as values of type list -o list, can perform all these operations in constant time with local pointer manipulation. The cost is that operations on linear functions are implemented by destructive pointer mutation - I should really have an affine type system to ensure that values of type list -o list are used at most once.

• In Part 2, I observed that the perfectly natural, λProlog/Twelf-inspired operation of matching against these linear functions greatly increased their expressive power. By adding an extra bit of information to the runtime representation of data (an answer to the question "where is the hole, if there is a hole?"), the matching operation could be done in constant time (well, time linear in the number of cases). A simple subordination analysis meant that checking for exhaustiveness and redundancy in case analysis was possible as well. In the case study in Part 2, I motivated another kind of pattern matching, one foreign to the λProlog/Twelf way of looking at the world but one intimately connected to the famous functional data structure known as The Zipper.
In this last installment, I show how linear representational functions can completely subsume zippers. ## The essence of zippers A zipper is a functional data structure that allows you to poke around on the inside of a datatype without having to constantly descend all the way int the datatype. Illustrations of zippers generally look something like this:1 The zipper data structure consists of two parts: one of them is a "path" or "one-hole-context", which, owing to Conor's influence [1], I will call a derivative, and the other is a subterm. We maintain the invariant that as we move into the derivative ("up") or into the subterm ("down") with constant-time operations, the combination of the derivative and the subterm remains the original term that we are editing. A Levy# implementation of derivative-based zippers as presented in Huet's original paper on zippers [2] can be found in thezipper.levy. To understand linear function-based zippers, we see that a linear term a -o b is kind of obviously a b value with one a value missing, so if we pair a value a -o b with a value of type a, we would hope to be able to use the pair as a zipper over b values. The one thing that we can't do with the linear functions we've discussed so far, however, is look "up" in the tree. A list is a good simple example here, since the two parts of a linear function-based zipper over a list are a difference list (the beginning of the list) and a list (the end of the list). Looking "up" involves asking what the last item in the list -o list is (if there is at least one item). Since a value of type list -o list has the structure [hole: list] Cons n1 (Cons n2 ... (Cons nk-1 (Cons nk hole))...), it's reasonable to match this term against the pattern [hole: list] outside (Cons i hole) to learn that outside is [hole: list] Cons n1 (Cons n2 ... (Cons nk-1 hole)...) and that i is nk. The key is that, in the pattern, the linear variable hole appears as the immediate subterm to the constructor Cons, so that we're really only asking a question about what the closest constructor to the hole is. This kind of interior pattern matching is quite different from existing efforts to understand "pattern fragments" of linear lambda cacluli (that I know of), but it's perfectly sensible. As I'll show in the rest of this post, it's also easy enough to modify the implementation to deal with interior pattern matching efficiently, easy enough to do exhaustiveness checking, and easy enough to actually use for programming. For the last part, I'll use an example from Michael Adams; you can also see the linear pattern matching version of Huet's original example on Github: (linzipper.levy). ## Implementation The only trick to implementing zippers that allow interior pattern matching is to turn our linked list structure into a doubly linked list: the final data structure basically is a double-ended queue implemented by a doubly-linked list. First: we make our "tail pointer," which previously pointed into the last structure where the hole was, a pointer to the beginning of the last allocated cell, the one that immediately surrounds the hole. This lets us perform the pattern match, because we can see what the immediate context of the hole is just by following the tail pointer. (It also requires that we create a different representation of linear functions that are the identity, but that's the kind of boring observation that you can get by implementing a doubly-linked list in C.) 
After we have identified the immediate context of the hole, we also need a parent pointer to identify the smaller "front" of the linear function. The in-memory representation of [hole: list] Cons 9 (Cons 3 (Cons 7 hole)), for example, will look something like this: Just like the "where is the hole" information, this parent pointer gets to be completely ignored by the runtime if it hangs around after the linear function is turned into a non-holey value. ## Exhaustiveness checking Providing non-exhaustive match warnings for case analysis on the inside of a linear function is a bit tricker than providing non-exhaustive match warnings for case analysis on the outside of a linear function, but it's only just a bit trickier. First, observe that the subordination relation I talked about in the previous installment is the transitive closure of an immediate subordination relation that declares what types of values can be immediate subterms to constructors of other types. For instance, when we declare the type of the constructor Cons to be int -o list -o list, Levy# learns that int and list are both immediately subordinate to list. Levy# was already tracking both the immediate subordination relation and the full subordination relation; we can change the output of the "print subordination" command to reflect this reality: Levy. Press Ctrl-D to exit. Levy> data Z: even | EO: even -o odd | OE: odd -o even ;; data Z: even | EO: even -o odd | OE: odd -o even Levy> $subord ;; Subordination for current datatypes: even <| even even <| odd (immediate) odd <| even (immediate) odd <| odd Now, the way we enumerate the possible cases for an interior pattern match against a linear value of type holeTy -o valueTy is the following: • Enumerate all of the types ctxTy that holeTy is immediately subordinate to. These are all the types with constructors that might immediately surround the hole (which, of course, has type holeTy). • Filter out all the types ctxTy above that aren't subordinate to, or the same as, the value type valueTy. • For every constructor of the remaining types ctxTy, list the positions where a value of type holeTy may appear as an immediate subterm. Using the immediate subordination relation is actually unnecessary: alternatively, we could just start the filtering step with all types subordinate (or equal) to valueTy. I think the use of the immediate subordination relation is kind of neat, though. ## Case Study: No, Really, Scrap Your Zippers When I said that this idea had been kicking around in my head since shortly after WGP 2010, I meant specifically since the part of WGP 2010 when Michael Adams presented the paper "Scrap Your Zippers". One of the arguments behind the Scrap Your Zippers approach is that Huet's zippers involve a lot of boilerplate and don't work very well for heterogenerous datatypes. Small examples tend to minimize the annoyingness of boilerplate by virtue of their smallness, but the example Adams gave about heterogeneous zippers works (almost) beautifully in our setting. 
First, we describe the datatype of departments, which are pairs of an employee record (the boss) and a list of employees (subordinates): data E: name -o int -o employee ;; data Nil: list | Cons: employee -o list -o list ;; data D: employee -o list -o dept ;; val agamemnon = E Agamemnon 5000 ;; val menelaus = E Menelaus 3000 ;; val achilles = E Achilles 2000 ;; val odysseus = E Odysseus 2000 ;; val dept = D agamemnon (Cons menelaus (Cons achilles (Cons odysseus Nil))) ;; Our goal is to zipper our way in to the department value dept and rename "Agamemnon" "King Agamemnon." The reason this is only almost beautiful is that Levy# doesn't have a generic pair type, and the type of zippers over a heterogeneous datatype like dept is a pair of a linear representational function T -o dept and a value T for some type T. So, we need to come up with specific instances of this as datatypes, the way we might in Twelf: data PD: (dept -o dept) -o dept -o paird ;; data PE: (employee -o dept) -o employee -o paire ;; data PN: (name -o dept) -o name -o pairn ;; Given these pair types, we are able to descend into the structure without any difficulty: # Create the zipper pair val loc = PD ([hole: dept] hole) dept ;; # Move down and to the left comp loc = let (PD path dept) be loc in let (D boss subord) be dept in return PE ([hole: employee] path (D hole subord)) boss ;; # Move down and to the left comp loc = let (PE path employee) be loc in let (E name salary) be employee in return PN ([hole: name] path (E hole salary)) name ;; At this point, we could just do the replacement in constant time by linear application: comp revision = let (PN path name) be loc in return path KingAgamemnon ;; For comparison, the Scrap Your Zippers framework implements this with the function fromZipper. However, everything above could have been done in the previous installment. Our new kind of pattern matching reveals its power when we try to walk "up" in the data structure, so we'll do that instead. The first step takes us back to an employee -o dept zipper, and is where we give Agamemnon his kingly designation: comp loc = let (PN path name) be loc in let ([hole: name] path (E hole salary)) be path in return PE path (E KingAgamemnon salary) ;; The first step was easy: the only place a name hole can appear in an dept datatype is inside of an employee. An employee, however, can appear in a value of type dept in one of two paces: either as the boss or as one of the members of the list of subordinates. Therefore, if we want to avoid nonexhaustive match warnings, we have to give an extra case:2 comp revision = let (PE path employee) be loc in match path with | [hole: employee] D hole subord -> return D employee subord | [hole: employee] path (Cons hole other_subord) -> return dept ;; # Error? Note that, in the first match above, the case was [hole: employee] D hole subord, not [hole: employee] path (D hole subord). As in the previous installment, I used the subordination relation to conclude that there were no values of type dept -o dept other than the identity; therefore, it's okay to allow the simpler pattern that assumes path is the identity. The code for this example is in scrap.levy. ## Conclusion Proposals for radically new language features should be treated with some care; do they really add enough expressiveness to the language? I hope that this series has at least suggested the possibility that linear function types might be desirable as a core language feature in a functional language. 
Linear representational functions expand the class of safe, pure algorithms to capture algorithms that could previously only be done in an impure way, and they give a completely principled (and cast-free) way of scrapping your zipper boilerplate. And perhaps the most important feature of this proposal is one I haven't touched so far, which is the added simplicity of reasoning about programs, both informally and formally. As for the "informally," if you've ever been flummoxed by the process of making sure that your heavily tail-recursive program has an even number of List.rev errors, or if you have avoided making functions tail-recursive for precisely this reason, I think linear representational functions could be a huge help. One thing I'd like to do if I have time is to work through type inference in context with linear representational functions; I imagine many things would be much clearer. In particular, I suspect there wouldn't be any need for both Cons and Snoc lists or the "fish" operator. The formal reasoning aspect can be glimpsed by looking at the previous case study: proving that the tail-recursive functions lin_of_zip and zip_of_lin are inverses has the structure of a double-reverse theorem (proving that List.rev (List.rev xs) = xs). Maybe it's just me being dense, but I have trouble with double-reverse theorems. On the other hand, if we needed to prove that the structurally-inductive and not tail-recursive functions lin_of_zip' and zip_of_lin' (which we can now implement, of course) are inverses, that's just a straightforward induction and call to the induction hypothesis. And making dumb theorems easier to prove is, in my experience at least half the battle of using theorem provers. ### Expressiveness and efficiency, reloaded The claim that linear representational functions can completely subsume zippers is undermined somewhat by the emphasis I have put on making matching and other operations O(1). Just to be clear: I can entirely implement linear representational functions using functional data structures (in fact, by using derivatives!) if I'm not concerned with performance. There's even a potential happy medium: rather than invalidating a zipper, I think it would be possible to merely mark a zipper as "in use," so that the second time you use a zipper it gets silently copied. This means that if you think about affine usage, you get constant-time guarantees, but if you just want to use more-awesome zippers, you can program as if it was a persistent data structure. This "persistent always, efficient if you use it right" notion of persistant data structures has precedent, it's how persistent union-find works. 1 That particular image is a CC-SA licensed contribution by Heinrich Apfelmus on the Haskell Wikibook on zippers. Thanks, Heinrich! 2 If we had a polymorphic option type we could just return Some or None, of course. Meh. [1] Conor McBride, "The Derivative of a Regular Type is its Type of One-Hole Contexts," comically unpublished. [2] Gúrard Huet, "Function Pearl: The Zipper," JFP 1997. [3] Michael D. Adams "Scrap Your Zippers," WGP 2010. ## Monday, August 15, 2011 ### Holey Data, Part 2/3: Case Analysis on Linear Functions The previous installment was titled "(Almost) Difference Lists" because there was one operation that we could (technically) do on a Prolog difference list that we can't do on the difference lists described in the previous section. If a Prolog difference list is known to be nonempty, it's possible to match against the front of the list. 
No one ever does this that I can tell, because if the Prolog difference list is empty, this will mess up all the invariants of the difference list. Still, however, it's there. We will modify our language to allow pattern matching on linear functions, which is like pattern matching on a difference list but better, because we can safely handle emptiness. The immediate application of this is that we have a list-like structure that allows constant-time affix-to-the-end and remove-from-the-beginning operations: a queue! Due to the similarity with difference lists, I'll call this curious new beast a difference queue. This is all rather straightforward, except for the discussion of coverage checking, which involves a well-understood analysis called subordination. But we'll cross that bridge when we come to it.

A pattern matching aside. Pattern matching against functions? It's certainly the case that in ML we can't pattern match against functions, but as we've already discussed towards the end of the intro to Levy#, the linear function space is not a computational function space like the one in ML, it's a representational function space like the one in LF/Twelf, a distinction that comes from Dan Licata, Bob Harper, and Noam Zeilberger [1, 2]. And we pattern match against representational functions all the time in LF/Twelf. Taking this kind of pattern matching from the logic programming world of Twelf into the functional programming world is famously tricky (leading to proposals like Beluga, Delphin, (sort of) Abella, and the aforementioned efforts of Licata et al.), but the trickiness is always from attempts to open up a (representational) function, do something computational on the inside, and then put it back together. We're not going to need to do anything like that, luckily for us.

## Difference queues

We're going to implement difference queues as values of type q -o q, where q is an interesting type: because definitions are inductive, the type q actually has no inhabitants.

data QCons: int -o q -o q ;;

An alternative would be to implement difference queues using the difference lists list -o list from the previous installment, which would work fine too. We'll also have an option type, since de-queueing might return nothing if the queue is empty, as well as a "valof" equivalent operation to force the queue from something that may or may not be a queue. This getq option will raise a non-exhaustive match warning during coverage checking, since it can obviously raise a runtime error.

data None: option | Some: int -o (q -o q) -o option ;;

val getq = thunk fun opt: option -> let (Some _ q) be opt in return q ;;

The operations to make a new queue and to push an item onto the end of the queue use the functionality that we've already presented:

val new = thunk return [x:q] x ;;

val push = thunk fun i: int -> fun queue: (q -o q) -> return [x:q] queue (QCons i x) ;;

The new functionality comes from the pop function, which matches against the front of the list. An empty queue is represented by the identity function.

val pop = thunk fun queue: (q -o q) ->
  match queue with
  | [x:q] x -> return None
  | [x:q] QCons i (queue' x) -> return Some i queue' ;;

Let's take a closer look at the second pattern, [x:q] QCons i (queue' x). The QCons constructor has two arguments, and because the linear variable x occurs exactly once, it has to appear in one of the two arguments.
This pattern is intended to match the case where the linear variable x appears in the second argument of the constructor (read: inside of the the q part, not inside of the int part), and the pattern binds a linear function queue' that has type q -o q. You can see how these difference queues are used in linearqueue.levy. Another pattern matching aside. The above-mentioned patterns (and, in fact, all accepted patterns in this current extension of Levy#) actually come from the set of Linear LF terms that are in what is known as the pattern fragment (appropriately enough). The pattern fragment was first identified by Dale Miller as a way of carving out a set of unification problems on higher-order terms that could 1) always be given unitary and decidable solutions and 2) capture many of the actual unification problems that might arise in λProlog [3]. Anders Schack-Nielsen and Carsten Schürmann later generalized this to Linear LF [4], which as I've described is the language that we're essentially using to describe our data. ## Coverage checking with subordination In the previous section we saw that the two patterns [x:q] x and [x:q] QCons i (queue' x) were used to match against a value of type q -o q, and the coverage checker in Levy# accepted those two patterns as providing a complete case analysis of values of this type. But the constructor QCons has two arguments; why was the coverage checker satisfied with a case analysis that only considered the linear variable occurring in the second argument? To understand this, consider the pattern [x:q] x and [x:q] QCons (di x) queue', where the linear variable does occur in the first argument. This pattern binds the variable di, a linear function value of type q -o int. But the only inhabitants of type int are constants, and the q must go somewhere; where can it go? It can't go anywhere! This effectively means that there are no closed values of type q -o int, so there's no need to consider what happens if the q hole appears inside of the first argument to QCons. Because of these considerations, Levy# has to calculate what is called the subordination relation for the declared datatypes. Subordination is an analysis developed for Twelf that figures out what types of terms can appear as subterms of other types of terms. I added a new keyword to Levy# for reporting this subordination information: Levy>$subord ;; Subordination for current datatypes: int <| q q <| q int <| option So int is subordinate to both q and option, and q is subordinate only to itself. Subordination is intended to be a conservative analysis, so this means that there might be values of type int -o q and int -o option and that there might be non-identity values of type q -o q, but that there are no values of type q -o option and the only value of type option -o option is [x: option] x. Levy# uses the no-overlapping-holes restriction to make subordination analysis more precise; without this restriction, a reasonable subordination analysis would likely declare q subordinate to option.1 Some more examples of subordination interacting with coverage checking can be seen in linearmatch.levy. ### Subordination and identity in case analysis We use subordination data for one other optimization. The following function is also from linearmatch.levy; it takes a value of type int -o list and discards everything until the place where the hole in the list was. 
val run = thunk rec run: (int -o list) -> F list is fun x: (int -o list) -> match x with | [hole: int] Cons hole l -> return l | [hole: int] Cons i (dx hole) -> force run dx ;; Because Levy# is limited to depth-1 pattern matching, a pattern match should really only say that the hole is somewhere in a subterm, not that the hole is exactly at a subterm. This would indicate that the first pattern should really be [hole: int] Cons (di hole) l, but by subordination analysis, we know that int is not subordinate to int and so therefore the only value of type int -o int is the identity function, so di = [hole:int] hole and we can beta-reduce ([hole:int] hole) hole to get just hole. So subordination is a very helpful analysis for us; it allows us to avoid writing some patterns altogether (patterns that bind variables with types that aren't inhabited) and it lets us simplify other patterns by noticing that for certain types "the hole appears somewhere in this subterm" is exactly the same statement as "this subterm is exactly the hole." ## Implementation In order to efficiently pattern match against the beginning of a list, we need to be able to rapidly tell which sub-part of a data structure the hole can be found in. This wasn't a problem for difference lists and difference queues, since subordination analysis is enough to tell us where the hole will be if it exists, but consider trees defined as follows: data Leaf: tree | Node: tree -o tree -o tree ;; If we match against a value of type tree -o tree, we need to deal with the possibility that it is the identity function, the possibility that the hole is in the left subtree, and the possibility that the hole is in the right subtree. This means that, if we wish for matching to be a constant-time operation, we also need to be able to detect whether the hole is in the left or right subtree without doing a search of the whole tree. This is achieved by adding an extra optional field to the in-memory representation of structures, a number that indicates where the hole is. Jason Reed correctly pointed out in a comment that for the language extension described in the previous installment, there was actually no real obstacle to having the runtime handle multiple overlapping holes and types like tree -o (tree -o tree). But due to the way we're modifying the data representation to do matching, the restriction to having at most one hole at a time is now critical: the runtime stores directions to the hole at every point in the structure. The memory representation produced by the value code [hole: tree] Node (Node Leaf Leaf) (Node (Node hole Leaf) Leaf) might look something like this, where I represent the number indicating where the hole is by circling the indicated hole in red: If the hole is filled, the extra data (the red circles) will still exist, but the part of the runtime that does operations on normal inductively defined datatypes can just ignore the presence of this extra data. (In a full implementation of these ideas, this would likely complicate garbage collection somewhat.) ## Case Study: Holey Trees and Zippers The binary trees discussed before, augmented with integers at the leaves, are the topic of this case study. A famous data structure for functional generic programming is Huet's zipper [5, 6], which describes inside-out paths through inductive types such as trees. 
The idea of a zipper is that it allows a programmer to place themselves inside a tree and move up, left-down, and right-down the tree using only constant-time operations based on pointer manipulation. The zipper for trees looks like this: data Top: path | Left: path -o tree -o path | Right: tree -o path -o path ;; In this case study, we will show how to coerce linear functions tree -o tree into the zipper data structure path and vice versa. In order to go from a linear function tree -o tree to a zipper path, we use a function that takes two arguments, an "outside" path and an "inside" tree -o tree. As we descend into the tree-with-a-hole by pattern matching against the linear function, we tack the subtrees that aren't on the path to the hole onto the outside path, so that in every recursive call the zipper gets bigger and the linear function gets smaller. val zip_of_lin = thunk rec enzip: path -> (tree -o tree) -> F path is fun outside: path -> fun inside: (tree -o tree) -> match inside with | [hole: tree] hole -> return outside | [hole: tree] Node (left hole) right -> force enzip (Left outside right) left | [hole: tree] Node left (right hole) -> force enzip (Right left outside) right ;; Given this function, the obvious implementation of its inverse just does the opposite, shrinking the zipper with every recursive call and tacking the removed data onto the linear function: val lin_of_zip = thunk rec enlin: path -> (tree -o tree) -> F (tree -o tree) is fun outside: path -> fun inside: (tree -o tree) -> match outside with | Top -> return inside | Left path right -> force enlin path ([hole: tree] Node (inside hole) right) | Right left path -> force enlin path ([hole: tree] Node left (inside hole)) ;; That's the obvious implementation, where we tack things on to the outside of the linear function. Linear functions have the property, of course, that you can tack things on to the inside or the outside, which gives us the opportunity to consider another way of writing the inverse that looks more traditionally like an induction on the structure of the zipper: val lin_of_zip' = thunk rec enlin: path -> F (tree -o tree) is fun path: path -> match path with | Top -> return [hole: tree] hole | Left path right -> force enlin path to lin in return [hole: tree] lin (Node hole right) | Right left path -> force enlin path to lin in return [hole: tree] lin (Node left hole) ;; These functions, and examples of their usage, can be found in linear-vs-zipper.levy. This installment was written quickly after the first one; I imagine there will be a bigger gap before the third installment, so I'm going to go ahead and say a bit about where I'm going with this, using the case study as motivation. I wrote three functions: • lin_of_zip turns zippers into linear functions by case analyzing the zipper and tacking stuff onto the "beginning" our outside of the linear function, • lin_of_zip' turns zippers into linear functions by inducting over the path and tacking stuff onto the "end" or inside of the linear function, and • zip_of_lin turns linear functions into zippers by case analyzing the "beginning" or outside of the linear function and tacking stuff on to the zipper. What about zip_of_lin', which turns linear functions into zippers by case analyzing the "end" or inside of the linear function? 
It's easy enough to describe what this function would look like: # val zip_of_lin' = thunk rec enzip: (tree -o tree) -> F path is # fun lin: (tree -o tree) -> # match lin with# | [hole: tree] hole -> return Top# | [hole: tree] lin' (Node hole right) -> # force enzip lin' to path in# return Left path right# | [hole: tree] lin' (Node left hole) -># force enzip lin' to path in# return Right left path ;; Toto, we're not in the pattern fragment anymore, but if we turn the representation of linear functions into doubly linked lists (or a double-ended-queues implemented as linked lists, perhaps), I believe we can implement these functions without trouble. At that point, we basically don't need the zippers anymore: instead of declaring that the derivative of a type is the type of its one-hole contexts, we can make the obvious statement that the linear function from a type to itself is the type of one-hole contexts of that type, and we can program accordingly: no new boilerplate datatype declarations needed! ## Conclusion A relatively simple modification of the runtime from the previous installment, a runtime data tag telling us where the hole is, allows us to efficiently pattern match against linear representational functions. This modification makes the use of linear representational functions far more general than just a tool for efficiently implementing a logic programming idiom of difference lists. In fact, I hope the case study gives a convincing case that these holey data structures can come close to capturing many of the idioms of generic programming, though that argument won't be fully developed until the third installment, where we move beyond patterns that come from the Miller/Schack-Nielsen/Schürmann pattern fragment. More broadly, we have given a purely logical and declarative type system that can implement algorithms that would generally be characterized as imperative algorithms, not functional algorithms. Is it fair to call the queue representation a "functional data structure"? It is quite literally a data structure that is a function, after all! If it is (and I'm not sure it is), this would seem to challenge at least my personal understanding of what "functional data structures" and functional algorithms are in the first place. 1 This is true even though there are, in fact, no closed values of type q -o option even if we don't have the no-overlapping-holes restriction (proof left as an exercise for the reader). [1] Daniel R. Licata, Noam Zeilberger, and Robert Harper, "Focusing on Binding and Computation," LICS 2008. [2] Daniel R. Licata and Robert Harper, "A Universe of Binding and Computation," ICFP 2009. [3] Dale Miller, "A Logic Programming Language with Lambda-Abstraction, Function Variables, and Simple Unification," JLC 1(4), 1991. [4] Anders Schack-Nielsen and Carsten Schürmann, "Pattern Unification for the Lambda Calculus with Linear and Affine Types," LFMTP 2010. [5] Gúrard Huet, "Function Pearl: The Zipper," JFP 1997. [6] Wikipedia: Zipper (data structure). ## Friday, August 12, 2011 ### Holey Data, Part 1/3: (Almost) Difference Lists This series has been brewing in my head since a year ago, starting at the 2010 Workshop on Generic Programming in Baltimore. It was at least a quarter-baked idea by the time I asked, on the CSTheory Stackexchange, a question that amounted to "are there functional programming difference lists that act like logic programming difference lists"? 
[1] ## Difference lists Difference lists, which are deep lore of the logic programming community, are essentially data structures that allow you to append to the beginning and the end of the list as a constant time operation; they can also be turned back into "regular" lists as a constant time operation. You can see [2,3,4] for details, but I'd almost encourage you not to: logic programming difference lists are a bit crazy. Let me give you two ways of thinking about difference lists. If you're a lazy person (in the Haskell sense), you may say that difference lists really have something to do with laziness in functional programming languages. William Lovas showed me some crazy Haskell code yesterday that does something of this flavor, and there's a paper that espouses this viewpoint [5]. I'll stick with a different viewpoint: difference lists are functions from lists to lists. This is how difference lists are implemented in the Haskell Data.DList library [6], and you can find people that back-ported the idea that difference lists are "really" functions to higher-order logic programming languages [7]. The problem with the idea of difference lists being functions is that you lose one of the fundamental properties that you started with: while it depends on how you implement things, difference lists implemented as functions will usually have a O(1) append operation but a O(n) "turn me into a regular list" operation. For a discussions of why, see the answers to my question on CSTheory Stackexchange. I've believed for some time that this unfortunate O(n)ness could be overcome, but I wanted a working proof-of concept implementation before I talked about it. It was during my recent visit to France that I realized that I really needed to base the proof-of-concept off of a language that was appropriately persnickety about the difference between what I called "value code" and what I called "computation code" in the previous post. And then I found Levy, and embraced and extended it into Levy#.1 So at this point you should read the previous post. ## Representing datatypes In the last post I talked about declaring and using datatypes in Levy#, using the example of lists of integers: data Nil: list | Cons: int -o list -o list It's necessary to say a few words about how these things are implemented. Every value in Levy# is represented in the OCaml interpreter as a heapdata ref - that is, a mutable pointer into a "box" in memory. Some boxes hold integers, other boxes hold the code and environments that represent suspended (thunked) computations. Declared constants from datatype declarations are another kind of box which holds both a tag (like Nil or Cons) and an appropriate number of values (that is, pointers) for the given datatype. As an example, here's the way the list Cons 3 (Cons 7 Nil), stored in the variable xs, could be laid out in memory: ## Representing difference lists ### Types On the level of types, a difference list will be represented as a member of the value type (the Linear LF type) list -o list. This adequately represents what a difference list is better than the list -> list type of Haskell or Lambda Prolog/Twelf. In Haskell, lots of functions have type list -> list, including the list reversal function. Lambda Prolog/Twelf does better; the LF type list -> list really has to be substitution function - it can only take a list and plug it into designated places in another list. But λx. Nil is a perfectly good LF value of type list -> list. 
In other words, it's not necessarily the case that applying something to a function is equivalent to tacking that thing on to the end of the difference list - the applied argument can simply be ignored. Linear functions must use their argument exactly once, so linear functions list -o list accurately represent the idea of difference lists, a list with exactly one missing list somewhere in it. For lists this is kind of a dumb way to put it: the missing list has to go at the end of a series of Conses or nowhere at all, so the linearity forces there to be one (as opposed to zero) occurrences of the hole. If we were talking about terms of type tree -o tree, then linearity would enforce that there was one (as opposed to zero or two or three...) occurrences of the hole. We'll come back to that later... ### Terms Recall from the previous post that Levy# starts off syntactically precluded from forming values of this type list -o list; we'll have to fix that by adding new syntax. On the level of syntax, a difference list will be represented using a Twelf-like notation where "λ x:τ.v" is written as [x:tau] v. Example The difference list containing first 9 and then 4 will be represented as [hole: list] Cons 9 (Cons 4 hole); let's call that difference list dxs. Let dys be the some other piece of value code with type list -o list, and let xs be the example difference list Cons 3 (Cons 7 Nil) from the previous section. Then dxs xs appends the difference list and regular list by filling in the hole with xs; the result is Cons 9 (Cons 4 (Cons 3 (Cons 7 Nil))). Similarly, [hole: list] dxs (dys hole) is a piece of value code that forms a new difference list by appending the two smaller difference lists. In terms of the expressive power of these difference lists, that's pretty much the story: the file linear.levy has some more examples, which can be run in the "holey-blog-1" tagged point in the repository. A Linear Logic Aside. This, perhaps, clears up the reason I used linear implication to define constructors like Cons. Say instead of Cons we had ConsBad, a Linear LF constant of type int -> list -> list. That type is equivalent in Linear LF to a term of type !int -o !list -o list. The exclamation points ("bangs") prevent the occurrence of a linear variable in either argument to ConsBad, so [hole: list] ConsBad 9 (ConsBad 4 hole) would NOT be a (piece of value code)/(Linear LF term) of type list -o list. ### Memory representation So far, I've explained how difference lists can be represented as linear functions, but what I originally promised was difference lists with O(1) append and turn-into-a-regular-list operations. To do this, I need to explain how linear functions can be represented in memory. A linear function is represented as another kind of box in memory, one with two pointers. The first pointer acts like a regular value - it points to the beginning of a data structure. The second pointer acts like a pointer into a data structure: it locates the "hole" - the location of the linear variable. This is the in-memory representation of our example [hole: list] Cons 9 (Cons 4 hole) from above, stored as the variable dxs. If we want to apply this linear function to a value and thereby insert a list into the hole, we merely have to follow the second hole-pointer and write to it, and then return the first value-pointer as the result. Voilà, application with only local pointer manipulation! 
If the result of applying xs to dxs was stored as the variable ys, the state of memory would look something like this: The catch is that the linear function box that dxs refers to no longer can be used in the same way: if we tried to write something different to the hole pointer, disaster could result: it would change the value of ys! Therefore, the difference list, and all linear function values, are use-only-once data structures, and the current implementation invalidates used linear function values (shown by the red X in the figure above) and will raise a runtime error if they are used twice. This could be precluded with a type system that ensured that values whose type was a linear function are used, computationally, in an affine way - that is, used at most once. Work on affine type systems is interesting, it's important, I've written about it before, but I see it as an orthogonal issue. That's all I really have to say about that. And that's how we implement linear2 representational functions with constant time composition and application, which corresponds to implementing different lists with O(1) append and turn-into-a-regular-list operations. (Disclaimer: this section is a bit of an idealization of what's actually going on in the OCaml implementation, but it's honest enough in the sense that everything's really implemented by local pointer manipulation, and represents what I think you'd expect an complied implementation of this code to do.) ### Expressiveness and efficiency One cost of our representation strategy is that we can only use values whose type is a linear function in an affine way. Another cost is, because the implementation really thinks in terms of "the hole," we can't have two-hole structures like tree -o tree -o tree. (Edit: Jason, in the comments, points out that I may be wrong about more-than-one-hole-structures being problematic.) But, if we were interested primarily in expressiveness and were willing to sacrifice constant-time composition and application guarantees, then it would absolutely be possible to have reusable difference lists and linear types representing n-holed contexts for any n. ## Case Study: The Dutch National Flag Problem William Lovas proposed this example, and I think it's great: the Dutch National Flag problem was proposed by Edsger Dijkstra as a generalization of what happens in a Quicksort partition step. The logic/functional programming variant of this problem is as follows: you have a list of red, white, and blue tagged numbers. You want to stably sort these to put red first, white second, and blue third: in other words, if the ML function red filters out the sublist of red-tagged numbers and so on, you want to return red list @ white list @ blue list. That's easy: the catch is that you can't use append, because you're only allowed to iterate over the initial unpartitioned list, and you're only allowed to iterate over that list one time. The solution in Levy# extended with difference lists is to pass along three difference lists representing the partitioned beginning of the full list and a regular list representing the un-partitioned end of the full list. At each step, you pick an item off the un-partitioned list and (constant-time) append it on to the end of the appropriate difference list. When there's nothing left in the un-partitioned list, you (constant-time) append the three difference lists while (constant-time) turning the concatenated difference lists into a normal list. 
    val dutch' = thunk rec loop :
         (list -o list) -> (list -o list) -> (list -o list) -> list -> F list is
      fun reds : (list -o list) ->
      fun whites : (list -o list) ->
      fun blues : (list -o list) ->
      fun xs : list ->
      match xs with
      | Nil -> return (reds (whites (blues Nil)))
      | Cons x xs ->
        (match x with
         | Red i -> force loop ([hole: list] reds (Cons x hole)) whites blues xs
         | White i -> force loop reds ([hole: list] whites (Cons x hole)) blues xs
         | Blue i -> force loop reds whites ([hole: list] blues (Cons x hole)) xs)
    ;;

This solution is in the repository in the file dutch.levy.

## Conclusion

I think that the incorporation of linear representational functions as difference lists is beautiful. I've written plenty of Standard ML where I've done tail-recursive manipulations on lists and so either 1) had to keep track of whether I'd called List.rev the right number of times or 2) worried whether all these append operations would end up hurting efficiency. This solution gives you the append functions that you need in a way that makes it easy to think about both the order of items and the runtime cost. And it's all enabled by some very pretty types!

The real straw that broke my back and led to this line of thinking, by the way, was an issue of expressiveness, not efficiency: I was trying to reason about such code that did tail-recursive operations on lists in Agda. That's another story, but it's one that's connected to my thesis in an important way, so I will try to get around to writing it.

In the next installment of the Holey Data series, I'll show how we can pattern match against difference lists in Levy#, which turns linear representational functions from (almost) difference lists into something significantly awesomer than difference lists.

(P.S. Thanks Paul a.k.a. Goob for sneaking me into the English Department's cluster to make the illustrations for this blog post.)

1 Originally I wanted to base it off of a Noam/Dan/Bob style language based on focusing in polarized logic, but it turns out that focused logic isn't a very natural language to actually program in. I owe a debt to Andrej and Matija for going through the legwork of making CBPV into an implemented toy language that it was possible to play around with.

2 Actually, this representation strategy might work fine not just for linear functions but for regular-old LF substitution functions with zero, one, two... arguments. When I tried to work this out with William Lovas, we tentatively concluded that you might need some sort of union-find structure to make sure that all the holes were effectively pointers to "the same hole."

In this note, I only motivated linear functions in terms of adequately representing difference lists; the real beauty of using difference lists starts coming into play with the next two posts in this series.

[1] Robert J. Simmons, "Difference Lists in Functional Programming", CSTheory StackExchange.
[2] Wikibook on Prolog Difference Lists.
[3] Dale Miller's implementation of (classic, logic-programmey) difference lists: "Difference Lists provide access to the tail of a list".
[4] Frank Pfenning's course notes: "Difference Lists".
[5] Sandro Etalle and Jon Mountjoy, "The Lazy Functional Side of Logic Programming".
[6] Don Stewart's Data.DList library.
[7] Yves Bekkers and Paul Tarau, "The Monads of Difference Lists".
## Thursday, August 11, 2011

### Embracing and extending the Levy language

I started writing the post about the thing I really wanted to write about, but then I realized that the background was already a long post on its own. So, that's this post! It's about two things:

1. Levy, an implementation of call-by-push-value that Andrej Bauer and Matija Pretnar wrote in OCaml for the PL Zoo. I made a fork of Levy that, among other small changes, reduces the need for parentheses by making operator precedence more like it is in OCaml.
2. A more divergent fork of Levy that adds datatypes in a familiar-but-as-yet-unmotivated way. This diverges from the goals of Levy - as a PL Zoo project, Levy aims to be a clean and uncluttered language implementation, and that's not the case for this more divergent fork.

I'll call this new version of Levy... let's see... Levy#. Sure, that works. Other things have been written about Levy at Andrej's blog and LtU.

## Basic concepts

The big idea of call-by-push-value, and, in turn, Levy, is that it is a programming language that is incredibly picky about the difference between code that builds data (I'll call this value code as shorthand) and code that does stuff (I'll call this computation code). Value code tends to be really boring: 3 builds an integer value and true builds a boolean value. Levy also builds in a couple of total binary operations on integers, so 3 + x and y < 12 are both viewed by Levy as value code. Variables are always value code.

Computation code is more interesting. If-statements are computation code: if v then c1 else c2 examines the value built by v and then either does the stuff described by c1 or else does the stuff described by c2. Functions are computation code: fun x: ty -> c receives a value and then does the stuff described by c, possibly using the value it received. The computation code return v does a very simple thing: it takes the value built by v and says "look, I made you a value." If the value code v had value type ty, then the computation code return v has computation type F ty. So yeah: I hadn't talked about types yet, but pieces of value code are given value types and pieces of computation code are given computation types.1

Which brings us to an interesting kind of value code that I didn't mention before: thunk c wraps up (and suspends) the computation code c in a value; if the computation code c had computation type ty, then the value code thunk c has value type U ty. The typical type of a function that would be written as int -> int in ML is U (int -> F int). It has to be a value type (ML functions are values), hence the "U", and the computation code in the function wants to take a value, do some stuff, and eventually return an integer. Here's an implementation of the absolute value function:

    val abs = thunk fun n: int ->
      if n < 0 then return 0 - n
      else return n
    ;;

In order to use the absolute value function, we have to "un-thunk" the computation code inside of the value: the computation code force v does this. Where we'd write (abs 4) in ML, we write force abs 4 in Levy (force binds more tightly than anything, including application).

## Evaluation order

A consequence of being persnickety about the difference between value code and computation code is that Levy is very explicit about sequencing. The SML/Haskell/OCaml code "foo(abs 4,abs 5)" means "call foo with the absolute values of 4 and 5."
One occasionally hard-learned lesson for people who have programmed in both SML and OCaml is that, if abs contains effects, this code might do different things: SML has a specified left-to-right evaluation order, but OCaml's evaluation order is unspecified, and in practice it's right-to-left. Such underspecification is impossible in Levy, since calling a function is a computation, and computations can't be included directly in values!

Instead, if I have computation code c of type F ty, we expect it to do some stuff and then say "look, I made you a value (of type ty)." The computation code c1 to x in c2 does the stuff described by c1, and when that stuff involves saying "look, I made you a value", that value is bound to x and used to do the stuff described by c2. Therefore, when we translate our SML/Haskell/OCaml code above to Levy, we have to explicitly write either

    force abs 4 to x1 in
    force abs 5 to x2 in
    force foo x1 x2

or

    force abs 5 to x2 in
    force abs 4 to x1 in
    force foo x1 x2

That concludes the introduction of Levy, (my slightly de-parenthesized fork of) the language implemented by Bauer and Pretnar. There are two example files here (discusses basics) and here (discusses functions).

## Matching

The first new feature in Levy# is the addition of a new kind of computation code, match v with | pat1 -> c1 ..., that is a generalization of both let-expressions let x be v in c and if-expressions if v then c1 else c2 from Levy. Patterns like pat1 are a refinement of value code: constants are patterns, variables are patterns (variables in patterns are binding occurrences, like in most functional languages), but addition is not a pattern, and variables can only appear once in a pattern.

The simplest new thing that we can do with this language construct is write computation code that mimics a switch statement in C:

    val f = thunk fun x: int ->
      match x with
      | 1 -> return 4
      | 2 -> return 3
      | 3 -> return 2
      | 4 -> return 1
      | 5 -> return 0
      | y -> return (x + y)
    ;;

This function is included as an example in the "datatype" fork of Levy# on Github here; let's look at what it actually does in the interpreter:

    bash-3.2$ ./levy.byte switch.levy
    ... snip ...
    Levy. Press Ctrl-D to exit.
    Levy> force f 1 ;;
    comp F int = return 4
    Levy> force f 3 ;;
    comp F int = return 2
    Levy> force f 5 ;;
    comp F int = return 0
    Levy> force f 7 ;;
    comp F int = return 14

That's what I expected to happen; hopefully it's what you expected to happen too. Let's declare victory and move on.

## Using datatypes

The main innovation of Levy# is the introduction of an inductive datatype mechanism, which is basically a bedrock component of every functional programming language (that isn't a Lisp) ever. This should have been a relatively small change, but parser generators are the worst and I'm bad at them so I basically had to rewrite the parser to have a new phase. Sigh.

I'll start with showing how datatypes are used, because that's a little more standard than the way I declare them. It would look exactly like the uses of ML datatypes, except that I curry them in a Haskell-ey fashion. The function sumlist, which sums up a list of integers, looks like this:

    val sumlist = thunk rec sumlist : list -> F int is
      fun xs : list ->
      match xs with
      | Nil -> return 0
      | Cons x xs -> force sumlist xs to y in return x + y
    ;;

The other thing that's new in the above example is the use of Levy's fixedpoint operator, but there's nothing interesting there from the perspective of reading the code. This and other examples are in datatype.levy, by the way.
Side note: I'm lazy but still wanted a realistic execution model *and* coverage checking (redundant/non-exhaustive matches raise warnings), so I limited the current version of Levy# to depth-1 pattern matching:

    Levy> match v4 with | Cons x1 (Cons x2 xs) -> return 0 ;;
    Type error: Cons x2 xs not allowed in pattern (depth-1 pattern matching only)

If I was doing this for real I'd implement real coverage checking and pattern compilation as a code transformation, which would leave the depth-1 pattern matching interpreter unchanged.

So that's the use of datatypes; one upshot is that my pieces of value code are maybe a bit more interesting than they used to be: I can make big pieces of value code like Cons 4 (Cons 9 (Cons 16 (Cons (-1) Nil))) to create a big piece of data. I think that's kind of interesting, at least.

## Declaring datatypes

Datatypes are just a way of telling the computer that you want to work with a particular set of inductively defined structures (at least this is what strictly positive datatypes, the only ones Levy# allows, are doing). A point that I've haltingly tried to make in a previous post was that, well, if you're going to write down a particular set of inductively defined terms, a canonical-forms based logical framework like Linear LF is a perfectly good language to write that in. So our constants look basically like the declaration of constants in a Linear LF signature, except that we make the constants uppercase, following ML conventions:

    data Nil: list
       | Cons: int -o list -o list

It's a little point, but it's kind of a big deal: we can understand all the values in Levy# as canonical forms in a simply-typed linear lambda calculus with integers and some arithmetic operations - kind of like Twelf with the integer constraint domain extension.2

The reason it's a big deal is that we can now observe that the two kinds of "function types" we have in this language are way different. There's a computational arrow ty1 -> ty2 that we inherited from Levy, and then there's a Linear LF linear implication arrow, a value arrow ty1 -o ty2 that we use in value code to construct patterns and values. I didn't invent this idea, it's actually a subset of what Licata and Harper propose in "A Universe of Binding and Computation" (ICFP 2009). Dan Licata would call the Linear LF arrow a "representational arrow," which is probably a better name than "value arrow."

Of course, the values that we actually use are, without exception, at atomic type; if we try to even write down Cons 4, which seems like it would have type list -o list, we get a type error from Levy#: constructor Cons expects 2 arg(s), given 1. So by requiring that all the Linear LF terms (i.e. value code) be canonical (beta-normal and eta-long), I've effectively excluded any values of type ty1 -o ty2 from the syntax of the language. What would it mean to introduce value code whose type was a linear implication to Levy#? Well, that's the thing I really wanted to write about.

1 Focusing and polarization-aware readers will notice that this has something to do with focusing. In particular: positive types are value types, negative types are computation types, F is "upshift," and U is "downshift."

2 Technical note: Boolean values, from the perspective of the value language, are just two defined constants, as explained here. That's important: we have Boolean constants that are values, but the elimination form for Booleans is computation code not value code.
If we had an "if" statement at the level of values, all sorts of crazy (and interesting) stuff could potentially happen. At least for the moment, I'm not considering that.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48380276560783386, "perplexity": 1926.2976300974235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937074.8/warc/CC-MAIN-20180419223925-20180420003925-00162.warc.gz"}
https://agenda.infn.it/event/15801/?print=1
5. Theoretical Physics (CSN4)

# The quest for dark sectors

## by Dr Claudia Frugiuele (Weizmann Institute)

Europe/Rome
Aula Seminari (LNF), Via Enrico Fermi, 40, 00044 Frascati (Roma)

### Description

Dark sectors are ubiquitous in physics beyond the Standard Model (SM), and may play a role in explaining many of the long-standing problems of the SM such as the existence of dark matter or the electroweak hierarchy problem. By definition, dark sectors are not charged under any of the known forces. Discovering their possible existence is thus challenging. I will describe how a broad program combining particle, nuclear and atomic physics experiments can effectively probe a large region of the parameter space. I will show how the unique signatures of such physics can already be searched for with existing/planned experiments, including neutrino-proton fixed-target experiments and precision atomic measurements.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8912219405174255, "perplexity": 1968.902896247878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00422.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-1-section-1-3-complex-numbers-1-3-exercises-page-104/102
College Algebra (11th Edition)

$(-3+4i)^2+6(-3+4i)+25=0$

In order to see that $-3+4i$ is a solution, we plug it in and make sure the equation remains true.

$(-3+4i)^2+6(-3+4i)+25=0?$

Expand the left hand side:

$9-24i+16i^2-18+24i+25$

Combine like terms using the fact that $i^2=-1$:

$(9-16-18+25)+(-24+24)i=0$

$(-3+4i)^2+6(-3+4i)+25=0$
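As an independent cross-check (not part of the textbook's solution): applying the quadratic formula to $x^2+6x+25=0$ gives $x=\frac{-6\pm\sqrt{36-100}}{2}=\frac{-6\pm 8i}{2}=-3\pm 4i$, so $-3+4i$ is indeed one of the two roots.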
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8282148241996765, "perplexity": 162.90788318670803}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590051.20/warc/CC-MAIN-20180718041450-20180718061450-00007.warc.gz"}
https://scholarship.rice.edu/handle/1911/8299/browse?rpp=20&offset=5034&etal=-1&sort_by=1&type=title&starts_with=G%E2%88%A8der%3DASC&order=ASC
Now showing items 5035-5054 of 12796

#### Hamilton Prioleau Bee (1972)
This thesis is a study of Hamilton P. Bee and his role in the American Civil War. Bee was first a Texas brigadier general, in charge of a state militia district, and then a Confederate brigadier general, in command of the ...

#### Hamiltonian theory and stochastic simulation methods for radiation belt dynamics (2009)
This thesis describes theoretical studies of adiabatic motion of relativistic charged particles in the radiation belts and numerical modeling of multi-dimensional diffusion due to interactions between electrons and plasma ...

#### Handle crushing harmonic maps between surfaces (2016-04-20)
In this thesis, we construct polynomial growth harmonic maps from once-punctured Riemann surfaces of any finite genus to any even-sided, regular, ideal polygon in the hyperbolic plane. We also establish their uniqueness ...

#### Handling Congestion and Routing Failures in Data Center Networking (2015-09-01)
Today's data center networks are made of highly reliable components. Nonetheless, given the current scale of data center networks and the bursty traffic patterns of data center applications, at any given point in time, ...

#### Hang in there... (2018-04-03)
Typically a secondary, technical deliverable, the reflected ceiling plan is a drawing used by architects to communicate the position of fixtures, mechanical penetrations, lighting and finishes to the build team. Ceilings ...

(1966) (1968)

#### Haplotype block and genetic association (2006)
The recently identified (Daly et al. 2001 and Patil et al. 2001) block-like structure in the human genome has attracted much attention since each haplotype block contains limited sequence variation, which can reduce the ...

#### Hard Core Urbanism: Urban planning at Potsdamer Platz in Berlin after the German reunification (1996)
Hard Core Urbanism is the tendency to produce corporate enclaves within the fluid city. The garrison mentality denies the complex and interwoven processes exemplified by the history of Potsdamer Platz. Breaking the ...

(1972) (1978)

#### Hardware Transactional Persistent Memory (2019-01-31)
Recent years have witnessed a sharp shift towards real-time data-driven and high-throughput applications, impelled by pervasive multi-core architectures and parallel programming models. This shift has spurred a broad ...

#### Hardware, wetware, and methods for precision control of gene expression (2019-04-17)
The ability to control the biochemical processes of the cell is fundamental to the goals of synthetic biology. This control implies the ability to specify the spatial, temporal, and amount of gene expression and protein ...

#### Hardware- versus Human-centric Assessment of Rehabilitation Robots (2015-04-20)
Individuals with disabilities arising from neurological injury require rehabilitation of the distal joints of the upper extremities to regain the ability to independently perform activities of daily living (ADL). Robotic ...

#### Harmonic diffeomorphisms between manifolds with bounded curvature (1991)
Let compact n-dimensional Riemannian manifolds $(M,g),\ (\widehat M,\ g)$, a diffeomorphism $u_0: M\to \widehat M$, and a constant $p > n$ be given. Then sufficiently small $L^{p}$ bounds on the curvature of $\widehat M$ ...

(1969)

#### Harmonic maps and the geometry of Teichmuller space (2004)
In this thesis work, we investigate the asymptotic behavior of the sectional curvatures of the Weil-Petersson metric on Teichmuller space. It is known that the sectional curvatures are negative. Our method is to investigate ...

#### Harmonic maps of trivalent trees (1991)
This thesis is a study of harmonic maps of trivalent trees into Euclidean space. The existence of such maps is established, and uniqueness is shown to hold up to a certain isotopy condition. Moreover, within its particular ...

#### Harmonic maps, heat flows, currents and singular spaces (1995)
This thesis studies some problems in geometry and analysis with techniques developed from non-linear partial differential equations, variational calculus, geometric measure theory and topology. It consists of three independent ...

#### Harmonic Wavelets Procedures and Wiener Path and Integral Methods for Response Determination and Reliability Assessment of Nonlinear Systems/Structures (2011)
In this thesis a novel approximate/analytical approach based on the concepts of stochastic averaging and of statistical linearization is developed for the response determination of nonlinear/hysteretic multi-degree-of-freedom ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6187714338302612, "perplexity": 3136.8756124303886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525598.55/warc/CC-MAIN-20191209225803-20191210013803-00352.warc.gz"}
https://link.springer.com/chapter/10.1007%2F978-3-319-68640-0_10
# The Maximum Edge Weight Clique Problem: Formulations and Solution Approaches

• Dalila B. M. M. Fontes
• Sergiy Butenko
• Marco Buongiorno Nardelli
• Marco Fornari
• Stefano Curtarolo

Chapter. Part of the Springer Optimization and Its Applications book series (SOIA, volume 130).

## Abstract

Given an edge-weighted graph, the maximum edge weight clique (MEWC) problem is to find a clique that maximizes the sum of edge weights within the corresponding complete subgraph. This problem generalizes the classical maximum clique problem and finds many real-world applications in molecular biology, broadband network design, pattern recognition and robotics, information retrieval, marketing, and bioinformatics, among other areas. The main goal of this chapter is to provide an up-to-date review of mathematical optimization formulations and solution approaches for the MEWC problem. Information on standard benchmark instances and state-of-the-art computational results is also included.

## Notes

### Acknowledgments

This work was carried out while the second author was a visiting scholar at Texas A&M University, College Station, TX, USA, and is partially supported by scholarship SFRH/BSAB/113662/2015. Partial support by DOD-ONR (N00014-13-1-0635) and NSF (CMMI-1538493) grants is also gratefully acknowledged.
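For a concrete starting point, one standard textbook-style way to state MEWC as a quadratic binary program is the following sketch; it is a generic formulation, not one quoted from this chapter. With graph $G=(V,E)$, edge weights $w_{ij}$, and a binary variable $x_i$ for each vertex $i$:

$$\max \sum_{\{i,j\}\in E} w_{ij}\,x_i x_j \quad \text{s.t.} \quad x_i + x_j \le 1 \ \ \forall\,\{i,j\}\notin E,\ i\ne j, \qquad x_i \in \{0,1\} \ \ \forall\, i\in V$$

The constraints forbid selecting two non-adjacent vertices, so any feasible $x$ is the incidence vector of a clique, and the objective sums the weights of the edges inside that clique.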
© Springer International Publishing AG 2017
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9231410622596741, "perplexity": 27433.679478668208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831715.98/warc/CC-MAIN-20181219065932-20181219091932-00334.warc.gz"}
https://mathematica.stackexchange.com/questions/91893/how-to-service-preemptive-computations-while-running-a-librarylink-function
# How to service preemptive computations while running a LibraryLink function?

Cross-posted to Wolfram Community

The following is from the comparison between WSTP (MathLink) and LibraryLink in the documentation:

> When the Wolfram Language is waiting for a WSTP application to return a result, it can be used to service preemptive computations such as those needed for user interface operations. When a library function is running this will not happen without effort by the author of the library.

(Emphasis by me.)

What do I need to do to allow for preemptive computations to be serviced while running LibraryLink functions? Is there a function similar to libData->AbortQ() that I need to call from time to time? AbortQ() itself won't allow for this (I already checked). I did not find anything promising in WolframLibrary.h, but the documentation suggests that there should be a way.

Update: The following works, but I am not sure of its performance impact, and I worry that this is just an abuse of functionality meant for something else. Periodically execute the following in the LibraryLink function (the same way you would call AbortQ()):

    MLINK lp = libData->getMathLink(libData);

John Fultz has responded on Wolfram Community: it is sufficient to keep calling libData->AbortQ(). Embarrassingly, I must have made a mistake when testing this, as it does work. I should note, though, that AbortQ() must be called frequently for it to be fluid; I originally only called it every second.
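To make that advice concrete, here is a minimal sketch of a long-running library function that polls libData->AbortQ() on every iteration of its work loop. The function name busyLoop and its integer argument are made up for illustration, and the usual WolframLibrary_getVersion / WolframLibrary_initialize boilerplate is omitted; the LibraryLink calls themselves (MArgument_getInteger, libData->AbortQ(), LIBRARY_NO_ERROR, etc.) are the standard ones from WolframLibrary.h.

    #include "WolframLibrary.h"

    /* Hypothetical long-running library function that polls libData->AbortQ()
       on every iteration, per the answer above. */
    DLLEXPORT int busyLoop(WolframLibraryData libData, mint Argc,
                           MArgument *Args, MArgument Res)
    {
        mint n = MArgument_getInteger(Args[0]);   /* iteration count (made-up argument) */
        mint i, acc = 0;

        for (i = 0; i < n; i++) {
            /* Cheap check; calling it often (not, say, once a second) is what
               keeps preemptive computations and the front end responsive, and
               lets a user abort take effect promptly. */
            if (libData->AbortQ())
                return LIBRARY_FUNCTION_ERROR;    /* bail out if the user aborted */

            acc += i % 7;                         /* stand-in for real per-iteration work */
        }

        MArgument_setInteger(Res, acc);
        return LIBRARY_NO_ERROR;
    }

On the Wolfram Language side such a function would be loaded with something like LibraryFunctionLoad["myLibrary", "busyLoop", {Integer}, Integer]; the only important point is that AbortQ() gets called many times per second while the library code runs.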
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5004575848579407, "perplexity": 992.0341953614055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526064.11/warc/CC-MAIN-20190719053856-20190719075856-00558.warc.gz"}