Columns: url (string, 14–1.76k chars) · text (string, 100–1.02M chars) · metadata (string, 1.06k–1.1k chars)
http://www.maplesoft.com/support/help/Maple/view.aspx?path=DEtools/checkrank
DEtools - Maple Programming Help

DEtools[checkrank] - illustrate ranking to be used for a rifsimp calculation

Calling Sequence
checkrank(system, options)
checkrank(system, vars, options)

Parameters
system - list or set of polynomially nonlinear PDEs or ODEs (may contain inequations)
vars - (optional) list of the dependent variables to solve for
options - (optional) sequence of options to control the behavior of checkrank

Description
• To simplify systems of PDEs or ODEs, a ranking must be defined over all indeterminates present in the system. The ranking allows the algorithm to select an indeterminate for which to solve algebraically when looking at an equation. The checkrank function can be used to understand how the ranking-associated options define a ranking in rifsimp. For more detailed information about rankings, please see rifsimp[ranking].
• The checkrank function takes the system as input along with the options:
vars - list of dependent variables (see below)
indep=[indep vars] - list of independent variables (see below)
ranking=[...] - specification of the exact ranking (see rifsimp[ranking])
degree=n - use all derivatives up to differential order n
• The output is a list that contains the derivatives in the system ordered from highest to lowest rank. If degree is given, all possible derivatives of all dependent variables up to the specified degree are used; otherwise, only the derivatives present in the input are used.
• Default Ranking
When simplifying a system of PDEs or ODEs, you may want to eliminate higher order derivatives in favor of lower order derivatives. Do this by using a ranking by differential order, as is the default for rifsimp. Unfortunately, this says nothing about how ties are broken, for example, between two third order derivatives. The breaking of ties is accomplished by first looking at the differentiations of the derivative with respect to each independent variable in turn. If they are of equal order, then the dependent variable itself is examined. The independent variable differentiations are examined in the order in which they appear in the dependency lists of the dependent variables, and the dependent variables are ordered alphabetically. So, for example, given an input system containing f(x,y,z), g(x,y,z), h(x,z), the following will hold:
[x,y,z] - order of independent variables
[f,g,h] - order of dependent variables
f[x] < g[xx] - by differential order
g[xy] < f[xxz] - by differential order
f[xy] < g[xx] - by differentiation with respect to x (x > y); note: differential order is equal
f[xzz] < g[xyz] - by differentiation with respect to y
g[xx] < f[xx] - by dependent variable; note: differentiations are exactly equal
h[xz] < f[xz] - by dependent variable
Note that, in the above example, the only time the dependent variable comes into play is when all differentiations are equal.
• Changing the Default
To change the default ranking, use the vars, indep=[...], or ranking=[...] options. The vars can be specified in two distinct ways:
1. Simple list: if vars is specified as a simple list, this option overrides the alphabetical order of the dependent variables described in the default ordering section.
2. Nested list: this option gives a solving order for the dependent variables. For example, if vars were specified as [[f],[g,h]], this would tell rifsimp to rank any derivative of f greater than all derivatives of g and h.
When comparing g and h, the solving order would then be differential order, then differentiations, and then the dependent variable name, as specified by the input [g,h]. This would help in obtaining a subset of the system that is independent of f; that is, a smaller PDE system in g and h only.
• The indep=[...] option provides for the specification of the independent variables for the problem, as well as the order in which differentiations are examined. So if the option indep=[x, y] were used, then f[x] would be ranked higher than f[y], but if indep=[y, x] were specified, then the opposite would be true.

Examples

> with(DEtools):

The first example uses the default ranking for a simple system.

> sys := [diff(g(x),x,x) - g(x) = 0, diff(f(x),x)^3 - diff(g(x),x) = 0];

sys := [diff(g(x),x,x) - g(x) = 0, diff(f(x),x)^3 - diff(g(x),x) = 0]   (1)

> checkrank(sys);

[diff(g(x),x,x), diff(f(x),x), diff(g(x),x), g(x)]   (2)

By default, the first equation would be solved for the second order derivative in g(x), while the second equation would be solved for the first order derivative in f(x). Suppose instead that we always want to solve for g(x) before f(x). We can use vars.

> checkrank(sys, [[g],[f]]);

[diff(g(x),x,x), diff(g(x),x), g(x), diff(f(x),x)]   (3)

So here g(x) and all its derivatives are ranked higher than f(x). The next example shows the default for a PDE system in f(x,y), g(x,y), h(y) (where we use the degree=2 option to get all second order derivatives):

> checkrank([f(x,y), g(x,y), h(y)], degree=2);

[diff(f(x,y),x,x), diff(g(x,y),x,x), 0, diff(f(x,y),x,y), diff(g(x,y),x,y), 0, diff(f(x,y),y,y), diff(g(x,y),y,y), diff(h(y),y,y), diff(f(x,y),x), diff(g(x,y),x), 0, diff(f(x,y),y), diff(g(x,y),y), diff(h(y),y), f(x,y), g(x,y), h(y)]   (4)

All second order derivatives are listed first, then the first derivatives with respect to x ahead of the first derivatives with respect to y, and finally f(x,y), then g(x,y), then h(y). (The 0 entries correspond to derivatives of h(y) with respect to x, which vanish identically since h depends on y only.)
Suppose we want to eliminate higher derivatives involving y before x. We can use indep for this as follows:

> checkrank([f(x,y), g(x,y), h(y)], indep=[y,x], degree=2);

[diff(f(x,y),y,y), diff(g(x,y),y,y), diff(h(y),y,y), diff(f(x,y),x,y), diff(g(x,y),x,y), 0, diff(f(x,y),x,x), diff(g(x,y),x,x), 0, diff(f(x,y),y), diff(g(x,y),y), diff(h(y),y), diff(f(x,y),x), diff(g(x,y),x), 0, f(x,y), g(x,y), h(y)]   (5)

Now to eliminate f(x,y) and its derivatives in terms of g(x,y) and h(y), and to rank y derivatives higher than x, we can combine the options to obtain the following.

> checkrank([f(x,y), g(x,y), h(y)], [[f],[g,h]], indep=[y,x], degree=2);

[diff(f(x,y),y,y), diff(f(x,y),x,y), diff(f(x,y),x,x), diff(f(x,y),y), diff(f(x,y),x), f(x,y), diff(g(x,y),y,y), diff(h(y),y,y), diff(g(x,y),x,y), 0, diff(g(x,y),x,x), 0, diff(g(x,y),y), diff(h(y),y), diff(g(x,y),x), 0, g(x,y), h(y)]   (6)
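The tie-breaking rules in the Description section are easy to prototype outside Maple. The following Python sketch is not Maple's implementation, only an illustration of the default ranking described above: a derivative is compared first by total differential order, then by its differentiation counts taken per independent variable in order, and finally by the position of its dependent variable in the solving order.

```python
# Hypothetical helper illustrating rifsimp's *default* ranking as described above.
# A derivative is represented as (dep, counts), e.g. ("f", (1, 0, 2)) for f[xzz]
# with independent variables ordered as ["x", "y", "z"].

def rank_key(deriv, dep_order):
    """Return a sort key: larger keys correspond to higher-ranked derivatives."""
    dep, counts = deriv
    total_order = sum(counts)              # 1) total differential order
    # 2) differentiations, examined per independent variable in order;
    # 3) dependent variable: earlier in dep_order ranks higher, so negate its index
    return (total_order, counts, -dep_order.index(dep))

dep_order = ["f", "g", "h"]                # alphabetical default
derivs = [("f", (1, 0, 0)),                # f[x]
          ("g", (2, 0, 0)),                # g[xx]
          ("f", (1, 1, 0)),                # f[xy]
          ("h", (1, 0, 1))]                # h[xz]

# Highest-ranked first, mirroring the order of a checkrank output list
for d in sorted(derivs, key=lambda d: rank_key(d, dep_order), reverse=True):
    print(d)
```

Run on the sample list, this prints g[xx] ahead of f[xy] (tie in order broken on x-differentiations), then h[xz], then f[x], matching the comparisons tabulated in the Description.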
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9343292713165283, "perplexity": 1039.2785069434335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00136-ip-10-164-35-72.ec2.internal.warc.gz"}
https://en.wikibooks.org/wiki/Template:Equation/doc
# Template:Equation/doc This template, whose format is {{Equation|formula|number (optional)|alignment (left by default)}} allows equations to be offset from the main text, numbered, and anchored for later referencing using the template {{Eqnref|x}} (see final note), where x is the number of the equation. The equation is offset 10px from the text above and below. For example, {{Equation|[itex] x^2 + y^2 = z^2 [/itex]|1}} results in: (1) ${\displaystyle x^{2}+y^{2}=z^{2}}$ The number is optional and can be omitted as in {{Equation|[itex] x^2 + y^2 = z^2 [/itex]}} which gives: ${\displaystyle x^{2}+y^{2}=z^{2}}$ The third variable acts to align the equation to the left, center, or right (this last alignment is not recommended, but is available) and with an indentation of 5% of the width of the page. For example: {{Equation|[itex] x^2 + y^2 = z^2 [/itex]|2|left}} results in: (2) ${\displaystyle x^{2}+y^{2}=z^{2}}$ and {{Equation|[itex] x^2 + y^2 = z^2 [/itex]|3=right}} results in: ${\displaystyle x^{2}+y^{2}=z^{2}}$ Note: The equations are anchored by number for use with Template:Eqnref {{Eqnref|1}} {{Eqnref|2}} results in: (1) (2) Clicking on (1) or (2) jumps to the correspondingly numbered equation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7357427477836609, "perplexity": 2335.587808104185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00288-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.bbc.com/future/story/20140606-how-do-you-weigh-a-planet
# How do you weigh a planet?

Scales don't come Earth-sized, so you may think calculating the weight of a planet is a tricky task. But it's a lot easier to do if you use a nearby moon… Weight is a measure of how much gravity acts on you, which is why you would weigh less on the Moon, where gravity is weaker, than you would at home. So scientists talk about mass rather than weight, as mass is the same no matter where you are. To understand how we are able to calculate the mass of a planet, you have to start with the principle called the Law of Universal Gravitation, published in 1687 by Sir Isaac Newton. Newton's work tells us to look at how a planet affects the things around it. First, find a planet with a handy second object nearby, like a moon. Second, measure the distance from the moon to the planet. Third, time one complete orbit. This gives you the moon's speed, and the faster the moon is moving, the more massive the planet must be. This only allows you to compare the relative masses of planets. To find out the actual masses of planets we had to wait for Henry Cavendish's experiment in 1797. He set up an experiment with two 150kg lead balls representing planets, and two smaller spheres, representing moons, and he measured the gravitational pull between them. Cavendish's experiment led us to the missing piece of Newton's puzzle: the value of G – the number that relates the gravitational force between two bodies to their masses and distance apart. By putting the value of G into Newton's equation, Cavendish calculated Earth's mass to be six billion trillion tonnes, which is within 1% of our best estimate today.
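To make the moon-based recipe concrete, here is a small back-of-the-envelope calculation (a sketch using rounded textbook values for the Moon's orbit, not figures from the article): for a near-circular orbit, Newton's law gives M = 4π²r³ / (G T²).

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2 (Cavendish's missing piece)
r = 3.844e8            # mean Earth-Moon distance, m (rounded)
T = 27.32 * 24 * 3600  # sidereal orbital period of the Moon, s

# Circular orbit: G*M/r^2 = (2*pi/T)^2 * r  =>  M = 4*pi^2 * r^3 / (G * T^2)
# Strictly this gives the combined Earth+Moon mass; the Moon contributes only ~1%.
M = 4 * math.pi**2 * r**3 / (G * T**2)
print(f"Mass of Earth ~ {M:.2e} kg")  # ~6.0e24 kg, i.e. about six billion trillion tonnes
```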
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301937222480774, "perplexity": 374.3624931211168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768497.135/warc/CC-MAIN-20141217075248-00125-ip-10-231-17-201.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/5431/formally-proving-that-a-function-is-oxn
# Formally proving that a function is $O(x^n)$

Say I have a function \begin{equation*} f(x) = ax^3 + bx^2 + cx + d,\text{ where }a > 0. \end{equation*} It's clear that for a high enough value of $x$, the $x^3$ term will dominate and I can say $f(x) \in O(x^3)$, but this doesn't seem very formal. The formal definition is $f(x) \in O(g(x))$ if constants $k, x_0 > 0$ exist, such that $0 \le f(x) \le kg(x)$ for all $x > x_0$. My question is, what are appropriate values for $k$ and $x_0$? It's easy enough to find ones that apply (say $k = a + b + c + d$). By the formal definition, all I have to do is show that these numbers exist, so does it actually matter which numbers I use? For some value of $x$, $k$ could be anywhere from $1$ to $a + b + c + d + ...$. From my understanding, it doesn't matter what numbers I pick as long as they 'work', but is this right? It seems too easy. Thanks

- I changed some of the c's to k's because I think there was a clash there - hope that it was correct to do so. – anon Sep 25 '10 at 16:57
- Looks fine to me. You missed one though -- I fixed it :-P – Joel Sep 25 '10 at 17:03
- If $a < 0$, then the $0 \le kg(x)$ is violated... Perhaps a different definition? Or can $k < 0$? – Aryabhata Sep 25 '10 at 18:54
- @Moron: Yeah, I think we should have $|f(x)|<k|g(x)|$ for $x>x_0$. For instance, $\sin(x)=O(1)$ and the definition by Joel doesn't give this. – alext87 Sep 25 '10 at 18:58
- @alex: Yeah, since this seems like homework, just trying to make sure Joel knows the right definition taught in their class :-) – Aryabhata Sep 25 '10 at 19:03

The argument you are getting at, as I understand it, is roughly: $x^n \in O(x^n)$ and thus $kx^n \in O(x^n)$, so $f(x)$ acts like $O(x^3) + O(x^2) + O(x) + O(1)$, which can be reduced to $O(x^3)$. So the theorem we would like to prove now is that for $n\geq m$: $f \in O(x^n)$ and $g \in O(x^m)$ implies $f + g \in O(x^n)$. Once we have this, you just add up the monomials of the polynomial and that proves the result. Look at what we have, from the definitions: $$f \in O(x^n) \Rightarrow \exists x_0, k,\;\; \forall x>x_0,\;\; f(x) \leq kx^n$$ $$g \in O(x^m) \Rightarrow \exists x_1, k',\;\; \forall x>x_1,\;\; g(x) \leq k'x^m$$ Let $x_2$ be the maximum of $x_0$ and $x_1$, let $k''$ be the maximum of $k$ and $k'$, and add these inequalities: $$\forall x>x_2,\;\; (f+g)(x) \leq kx^n + k'x^m \leq k''(x^n + x^m) \leq 2 k'' x^n$$ Now the pair of values $(x_2,2k'')$ proves that $f+g \in O(x^n)$. To consider abstract functions like this makes the proof very easy, but it is clear that the values we exhibit to prove the existential may not be the best, although we still prove the theorem in an effective way. In particular, you could do a very detailed analysis of the functions in specific cases to get very tight bounds - in this lucky case it's not needed, which is why the theorem is easier to prove in the abstract case.

HINT $\quad\rm ax^3 + bx^2 + cx + d\ \le \ (|a|+|b|+|c|+|d|)\ x^3\ $ for $\rm\ x > 1$

Suppose you have functions $f(x)$ and $g(x)$. A really simple method (that works when $f(x)$ and $g(x)$ are polynomials) of determining a constant that works is the following. Consider $$\lim_{x\rightarrow\infty}\frac{f(x)}{g(x)}.$$ If the limit exists and equals a constant $0\leq C< \infty$, then $C+\epsilon$ for any $\epsilon>0$ is a constant that works. To see this just apply the definition of the limit: $\forall \epsilon>0$ there exists an $x_0(\epsilon)$ such that $\forall x>x_0(\epsilon)$ we have $\left|\frac{f(x)}{g(x)}-C\right|<\epsilon$, that is, $\frac{f(x)}{g(x)} < C+\epsilon$. Now you know the constant from calculating the limit and know the existence of $x_0$. Since you are always considering asymptotics when using this definition, you are never concerned with the value of $x_0$ (only that it exists). It does not matter what constants you use, and someone could easily use different constants to get $f(x)=O(g(x))$. This notation is to compare the growth rate of two functions. If $\lim_{x\rightarrow \infty} \frac{f(x)}{g(x)}$ does not exist or is hard to calculate, then as long as you can bound it above for large $x$ you still have $f(x)=O(g(x))$.
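The limit-based recipe from the last answer is easy to check symbolically. A small sketch using SymPy (the coefficient values are arbitrary, chosen only for illustration): the limit of $f(x)/x^3$ is the leading coefficient $a$, so any $k > a$ works for all sufficiently large $x$.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a, b, c, d = 2, -5, 3, 7            # arbitrary example coefficients with a > 0
f = a*x**3 + b*x**2 + c*x + d

C = sp.limit(f / x**3, x, sp.oo)    # -> 2, the leading coefficient a
print(C)

# With k = C + 1 = 3, f(x) <= k*x^3 for all sufficiently large x; spot-check a tail:
k = C + 1
print(all((f - k*x**3).subs(x, t) <= 0 for t in range(10, 200, 10)))  # True
```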
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9652760624885559, "perplexity": 186.18674647641978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125524.70/warc/CC-MAIN-20160428161525-00066-ip-10-239-7-51.ec2.internal.warc.gz"}
https://pyquil.readthedocs.io/en/v2.5.2/apidocs/autogen/pyquil.gates.CCNOT.html
# CCNOT

pyquil.gates.CCNOT(control1, control2, target)

Produces a doubly-controlled NOT gate:

CCNOT = [[1, 0, 0, 0, 0, 0, 0, 0],
         [0, 1, 0, 0, 0, 0, 0, 0],
         [0, 0, 1, 0, 0, 0, 0, 0],
         [0, 0, 0, 1, 0, 0, 0, 0],
         [0, 0, 0, 0, 1, 0, 0, 0],
         [0, 0, 0, 0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0, 0, 0, 1],
         [0, 0, 0, 0, 0, 0, 1, 0]]

This gate takes three qubit arguments and produces the controlled-controlled-NOT gate instruction: the target qubit has an X gate applied to it if both control qubits are in the excited state.

Parameters:
- control1 – The first control qubit.
- control2 – The second control qubit.
- target – The target qubit.

Returns: A Gate object.
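As a usage sketch (assuming a pyQuil 2.x environment; actually executing the program would additionally need a QVM or QPU connection, which is not shown), CCNOT composes with other gates inside a Program:

```python
from pyquil import Program
from pyquil.gates import X, CCNOT

# Prepare both controls in |1> so the doubly-controlled NOT flips the target qubit.
p = Program(
    X(0),            # control qubit 0 -> |1>
    X(1),            # control qubit 1 -> |1>
    CCNOT(0, 1, 2),  # target qubit 2 is flipped only because both controls are excited
)
print(p)  # prints the Quil program: X 0 / X 1 / CCNOT 0 1 2
```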
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.312847763299942, "perplexity": 209.09697650563527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00189.warc.gz"}
https://zbmath.org/?q=an:0870.60007
× ## Probabilities on the Heisenberg group: limit theorems and Brownian motion.(English)Zbl 0870.60007 Lecture Notes in Mathematics. 1630. Berlin: Springer. viii, 139 p. (1996). After certain aspects of probability theory had been unified within the setting of general locally compact groups the main engagement of researchers working in the field changed to deepening the available results for special classes of groups. It was not surprising that this new development started in France where Bourbaki’s spirit of utmost generality had somewhat faded in favor of a more problem-oriented trend of investigation. More precisely, in the succession of the monographs of U. Grenander [“Probabilities on algebraic structures” (1963; Zbl 0131.34804)], K. R. Parthasarathy [“Probability measures on metric spaces” (1967; Zbl 0153.19101)] and the reviewer [“Probability measures on locally compact groups” (1977; Zbl 0376.60002)] the structurally oriented probabilists were able to help themselves to rich collections of particular examples as they were contained in the Lecture Notes of Y. Guivarc’h, M. Keane and B. Roynette [“Marches aléatoires sur les groupes de Lie” (1977; Zbl 0367.60081)] and P. Diaconis [“Group representations in probability and statistics” (1988; Zbl 0695.60012)] on random walks on Lie groups and on group representations in probability and statistics, respectively. Now, more than 20 years after the appearance of the reviewer’s monograph up-to-date treatments of parts of the subject are highly desirable. The book under review deals with probabilities on the Heisenberg group $$\mathbb{H}$$, in the not too far future we shall have access to a book by W. Hazod and the late E. Siebert on stable convolution semigroups on groups and vector spaces. The present Lecture Notes have been completed in the course of the author’s “Habilitation” at the University of Bern. They are conceived expositorily but contain a large portion of original work accomplished by the author since his “Dissertation” in 1991. The author masters a great deal of literature on only 139 pages and at the same time manages to introduce the reader to a fascinating section of structural probability theory. The subtitle of the book which reads “Limit theorems and Brownian motion” refers to almost disjoint topics. Still the author succeeds at least methodically in relating properties of Brownian motion on $$\mathbb{H}$$ (Chapter 2) to weak and almost sure limit theorems for more general semigroups, measures and random variables within the framework of $$\mathbb{H}$$ (Chapter 3). In fact, Chapter 2 is mainly devoted to the work of L. Gallardo [in: Probability measures on groups, Lect. Notes Math. 928, 96-120 (1982; Zbl 0483.60072)] on the potential theory of Brownian motion (capacities, Wiener test, Poincaré’s criterion). But also the contributions of G. Pap (1992) and P. Ohring [Proc. Am. Math. Soc. 118, No. 4, 1313-1318 (1993; Zbl 0797.43003)] on the Lindeberg and Lyapunov theorems, and those of P. Crépel and B. Roynette [Z. Wahrscheinlichkeitstheorie Verw. Geb. 39, 217-229 (1977; Zbl 0342.60028)] on the iterated logarithm (for $$\mathbb{H}$$-valued random variables) receive special attention. In Chapter 3 the author touches even more upon his own work on the subject. He discusses the lightly trimmed products of $$\mathbb{H}$$-valued random variables with emphasis on the main theorem of his Ph. D. 
thesis of 1991, the Marcinkiewicz-Zygmund law of large numbers, non-classical laws of the iterated logarithm [mainly following the work of H.-P. Scheffler, Publ. Math. 47, No. 3/4, 377-391 (1995; Zbl 0861.60015) and Stat. Probab. Lett. 24, No. 3, 187-192 (1995; Zbl 0832.60011)], and the two-series theorem. The book adds favorably to the literature on recent advances in probability theory on groups. It is clearly written with an elaborate introduction and a very useful first chapter on basic notions and technical preparations of the method. The rich bibliography of 176 items goes far beyond the references actually employed in the text. As for the choice of the material treated in the book, the author aimed at describing the recent state of the art by orienting himself at his own contributions. Fortunately, he did not try to update the pioneering monographs of the 60s; such an enterprise would have exceeded the habitual size of a “Habilitationsschrift” and would certainly have spoiled the attractivity immanent in this valuable handy volume that will doubtlessly encourage further research interest in probability theory on groups. ### MSC: 60B15 Probability measures on groups or semigroups, Fourier transforms, factorization 60-02 Research exposition (monographs, survey articles) pertaining to probability theory 22E25 Nilpotent and solvable Lie groups 47D06 One-parameter semigroups and linear evolution equations 60Fxx Limit theorems in probability theory Full Text:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5488389730453491, "perplexity": 923.2989813870299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00043.warc.gz"}
http://umj.imath.kiev.ua/volumes/issues/?lang=en&year=2005&number=10
# Volume 57, № 10, 2005

Article (Russian)
### Majorant estimates for the percolation threshold of a Bernoulli field on a square lattice
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1315–1326
We suggest a method for obtaining a monotonically decreasing sequence of upper bounds of the percolation threshold of the Bernoulli random field on $Z^2$. On the basis of this sequence, we obtain a method of constructing approximations with the guaranteed exactness estimate for a percolation probability. We compute the first term $c_2 = 0,74683$ of the considered sequence.

Article (Russian)
### Some remarks on a Wiener flow with coalescence
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1327–1333
We study properties of a stochastic flow that consists of Brownian particles coalescing at contact time.

Article (Russian)
### Degenerate Nevanlinna-Pick problem
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1334–1343
A general solution of the degenerate Nevanlinna-Pick problem is described in terms of fractional-linear transformations. A resolvent matrix of the problem is obtained in the form of a J-expanding matrix of full rank.

Article (Russian)
### Qualitative investigation of a singular Cauchy problem for a functional differential equation
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1344–1358
We consider the singular Cauchy problem $$t x'(t) = f(t, x(t), x(g(t)), x'(t), x'(h(t))), \quad x(0) = 0,$$ where $x: (0, τ) → ℝ, g: (0, τ) → (0, + ∞), h: (0, τ) → (0, + ∞), g(t) ≤ t$, and $h(t) ≤ t, t ∈ (0, τ)$, for linear, perturbed linear, and nonlinear equations. In each case, we prove that there exists a nonempty set of continuously differentiable solutions $x: (0, ρ] → ℝ$ ($ρ$ is sufficiently small) with required asymptotic properties.

Article (Russian)
### On the distribution of the time of the first exit from an interval and the value of a jump over the boundary for processes with independent increments and random walks
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1359–1384
For a homogeneous process with independent increments, we determine the integral transforms of the joint distribution of the first-exit time from an interval and the value of a jump of a process over the boundary at exit time and the joint distribution of the supremum, infimum, and value of the process.

Article (Ukrainian)
### On properties of subdifferential mappings in Fréchet spaces
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1385–1394
We present conditions under which the subdifferential of a proper convex lower-semicontinuous functional in a Fréchet space is a bounded upper-semicontinuous mapping. The theorem on the boundedness of a subdifferential is also new for Banach spaces. We prove a generalized Weierstrass theorem in Fréchet spaces and study a variational inequality with a set-valued mapping.

Article (Ukrainian)
### Approximation of classes of analytic functions by Fourier sums in the metric of the space $L_p$
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1395–1408
Asymptotic equalities are established for upper bounds of approximants by Fourier partial sums in a metric of spaces $L_p,\quad 1 \leq p \leq \infty$ on classes of the Poisson integrals of periodic functions belonging to the unit ball of the space $L_1$. The results obtained are generalized to the classes of $(\psi, \overline{\beta})$-differentiable functions (in the Stepanets sense) that admit the analytical extension to a fixed strip of the complex plane.

Article (Ukrainian)
### Exact order of relative widths of classes $W^r_1$ in the space $L_1$
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1409–1417
As $n \rightarrow \infty$, the exact order of relative widths of classes $W^r_1$ of periodic functions in the space $L_1$ is found under restrictions on higher derivatives of approximating functions.

Anniversaries (Ukrainian)
### Ivan Oleksandrovych Lukovs'kyi (on his 70th birthday)
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1418-1419

Brief Communications (Russian)
### On domains with regular sections
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1420–1423
We prove the generalized convexity of domains satisfying the condition of acyclicity of their sections by a certain continuously parametrized family of two-dimensional planes.

Brief Communications (Russian)
### On one problem for comonotone approximation
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1424–1429
For a comonotone approximation, we prove that an analog of the second Jackson inequality with generalized Ditzian - Totik modulus of smoothness $\omega^{\varphi}_{k, r}$ is invalid for $(k, r) = (2, 2)$ even if the constant depends on a function.

Brief Communications (Russian)
### On one extremal problem for numerical series
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1430–1434
Let $Γ$ be the set of all permutations of the natural series and let $α = \{α_j\}_{j ∈ ℕ},\; ν = \{ν_j\}_{j ∈ ℕ}$, and $η = \{η_j\}_{j ∈ ℕ}$ be nonnegative number sequences for which $$\left\| \nu (\alpha \eta )_\gamma \right\|_1 := \sum\limits_{j = 1}^\infty \nu_j \alpha_{\gamma(j)} \eta_{\gamma(j)}$$ is defined for all $γ := \{γ(j)\}_{j ∈ ℕ} ∈ Γ$ and $η ∈ l_p$. We find $\sup_{\eta :\left\| \eta \right\|_p = 1} \inf_{\gamma \in \Gamma } \left\| \nu (\alpha \eta )_\gamma \right\|_1$ in the case where $1 < p < ∞$.

Brief Communications (Russian)
### Finite-dimensionality and growth of algebras specified by polylinearly interrelated generators
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1435–1440
We investigate the finite-dimensionality and growth of algebras specified by a system of polylinearly interrelated generators. The results obtained are formulated in terms of a function $\rho$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9038186073303223, "perplexity": 694.3002071098599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145774.75/warc/CC-MAIN-20200223123852-20200223153852-00276.warc.gz"}
https://scoop.eduncle.com/where-do-i-get-psp-with-solution
IIT JAM · February 10, 2021, 11:10 am

Amol ashok pawar: PSP as in previously solved papers? Eduncle has very good quality study material regarding this: detailed material with solved previous-year question papers as well. Also you can...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635007381439209, "perplexity": 16614.10754934797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00383.warc.gz"}
http://kau.diva-portal.org/smash/record.jsf?pid=diva2:917049
Boundedness of Hardy-type operators with a kernel: integral weighted conditions for the case $0<q<1\le p<\infty$

Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science. Charles University in Prague, Department of Mathematical Analysis. ORCID iD: 0000-0003-0234-1645

2017 (English). In: Revista Matemática Complutense, ISSN 1139-1138, E-ISSN 1988-2807. Article in journal (Refereed). Published.

##### Abstract [en]

Boundedness of a fundamental Hardy-type operator with a kernel is characterized between weighted Lebesgue spaces $L^p(v)$ and $L^q(w)$ for $0<q<1\le p<\infty$. The conditions are explicit and have a standard integral form.

Springer, 2017.

##### Keyword [en]

Hardy operators, Oinarov kernel, weighted Lebesgue spaces, weighted inequalities, integral operators

##### National Category

Mathematical Analysis Mathematics

##### Identifiers

OAI: oai:DiVA.org:kau-41241 DiVA: diva2:917049 Available from: 2016-04-05 Created: 2016-04-05 Last updated: 2017-11-30 Bibliographically approved

##### In thesis

1. The Weighted Space Odyssey. 2017 (English). Doctoral thesis, comprehensive summary (Other academic).

##### Abstract [en]

The common topic of this thesis is boundedness of integral and supremal operators between weighted function spaces. The first type of results are characterizations of boundedness of a convolution-type operator between general weighted Lorentz spaces. Weighted Young-type convolution inequalities are obtained and an optimality property of involved domain spaces is proved. Additional provided information includes an overview of basic properties of some new function spaces appearing in the proven inequalities. In the next part, product-based bilinear and multilinear Hardy-type operators are investigated. It is characterized when a bilinear Hardy operator inequality holds either for all nonnegative or all nonnegative and nonincreasing functions on the real semiaxis. The proof technique is based on a reduction of the bilinear problems to linear ones to which known weighted inequalities are applicable. Further objects of study are iterated supremal and integral Hardy operators, a basic Hardy operator with a kernel and applications of these to more complicated weighted problems and embeddings of generalized Lorentz spaces. Several open problems related to missing cases of parameters are solved, thus completing the theory of the involved fundamental Hardy-type operators.

##### Abstract [en]

Operators acting on function spaces are classical subjects of study in functional analysis. This thesis contributes to the research on this topic, focusing particularly on integral and supremal operators and weighted function spaces. Proving boundedness conditions of a convolution-type operator between weighted Lorentz spaces is the first type of a problem investigated here. The results have a form of weighted Young-type convolution inequalities, addressing also optimality properties of involved domain spaces. In addition to that, the outcome includes an overview of basic properties of some new function spaces appearing in the proven inequalities. Product-based bilinear and multilinear Hardy-type operators are another matter of focus. It is characterized when a bilinear Hardy operator inequality holds either for all nonnegative or all nonnegative and nonincreasing functions on the real semiaxis. The proof technique is based on a reduction of the bilinear problems to linear ones to which known weighted inequalities are applicable. The last part of the presented work concerns iterated supremal and integral Hardy operators, a basic Hardy operator with a kernel and applications of these to more complicated weighted problems and embeddings of generalized Lorentz spaces. Several open problems related to missing cases of parameters are solved, completing the theory of the involved fundamental Hardy-type operators.

##### Place, publisher, year, edition, pages

Karlstad: Karlstads universitet, 2017. p. 57

##### Series

Karlstad University Studies, ISSN 1403-8099 ; 2017:1

##### Keyword

integral operators, supremal operators, weights, weighted function spaces, Lorentz spaces, Lebesgue spaces, convolution, Hardy inequality, multilinear operators, nonincreasing rearrangement

##### National Category

Mathematical Analysis Mathematics

##### Identifiers

urn:nbn:se:kau:diva-41944 (URN), 978-91-7063-734-6 (ISBN), 978-91-7063-735-3 (ISBN)

##### Public defence

2017-02-10, 9C203, Karlstads universitet, Karlstad, 09:00 (English)

##### Note

Article 9 is published in the thesis as a manuscript with the same title. Available from: 2017-01-18 Created: 2016-04-28 Last updated: 2017-10-19 Bibliographically approved

#### Open Access in DiVA

fulltext: FULLTEXT02.pdf, 1701 kB, application/pdf, checksum SHA-512 4bc3ef98a4d9a3425aac2479d91d6764abeab61afa6ba5aed26541068f007c82ccdf96d13a9986bcbe371d97f343e609abe6e7e403265e41bf9e14d87c04ac71

Author: Křepela, Martin

##### By organisation

Department of Mathematics and Computer Science

##### In the same journal

Revista Matemática Complutense

##### On the subject

Mathematical Analysis
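For orientation only (this is the generic shape of such results, not necessarily the exact formulation used in the paper or thesis): characterizing boundedness of a Hardy-type operator with a kernel $k(x,y)\ge 0$ between $L^p(v)$ and $L^q(w)$ means finding explicit integral conditions on the weights $v, w$ that are equivalent to an inequality of the form

$$\left( \int_0^\infty \left( \int_0^x k(x,y)\, f(y)\, \mathrm{d}y \right)^{q} w(x)\, \mathrm{d}x \right)^{1/q} \le C \left( \int_0^\infty f(x)^{p}\, v(x)\, \mathrm{d}x \right)^{1/p} \qquad \text{for all } f \ge 0,$$

with a constant $C$ independent of $f$.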
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9299370646476746, "perplexity": 9605.407082736709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812938.85/warc/CC-MAIN-20180220110011-20180220130011-00444.warc.gz"}
https://www.physicsforums.com/threads/propagator-equation.176982/
# Propagator equation

1. Jul 14, 2007

### ehrenfest

Can someone explain to me why the equation
$$\psi(x,t) = \int{U(x,t,x',t')\psi(x',t')dx'}$$
where U is the propagator has an integral? I thought you could just multiply the propagator by an initial state and get a final state?

2. Jul 14, 2007

### meopemuk

In quantum mechanics, time evolution of wave functions is represented by the time evolution operator. The formula you wrote is the most general linear operator connecting wave functions at times t and t'.

3. Jul 14, 2007

### jostpuur

Perhaps so, if you call the time evolution operator a propagator. You can get the time evolution by multiplying the state vector by a correct operator
$$|\psi(t)\rangle = e^{-iH(t-t')/\hbar}|\psi(t')\rangle$$
This is quite abstract like this. If you use the position representation, then this operator is something more complicated than just a function, and this "multiplication" is not a pointwise multiplication of two functions.

4. Jul 14, 2007

### ehrenfest

OK. So for the position representation (which is the same as the X basis, right?) we have that (for a free particle):
$$U(x,t;x') \equiv <x|U(t)|x'> = \int^{\infty}_{-\infty}<x|p><p|x'>e^{-ip^2t/2m\hbar}dp$$
which can be reduced to
$$(\frac{m}{2\pi\hbar i t})^{1/2}e^{im(x-x')^2/2\hbar t}.$$
So why is it not
$$\psi(x,t) = (\frac{m}{2\pi\hbar i t})^{1/2}e^{im(x-x')^2/2\hbar t} \psi(x',0)$$
or simply
$$\psi(x,t) = U(x,t;x') \psi(x',0)?$$
Also, what exactly does the equivalence mean
$$U(x,t;x') \equiv <x|U(t)|x'>?$$

Last edited: Jul 14, 2007

5. Jul 14, 2007

### plmokn2

I could well be wrong, and at best I'll give a very restricted view since I probably know less than you, but... Think about a particle with psi(x,t=0) a delta function. Over time psi spreads out, and psi(x',t)=U(x,t;x') psi(x,t=0). But then think about (the maybe unphysical, I don't know) situation of a particle starting with two infinitely thin peaks. Then at a later time, at position x', psi will be given by the contribution from the first peak which has spread out + the contribution from the second peak spread. Generalise for a continuous wavefunction and you get an integral.

6. Jul 14, 2007

### ehrenfest

That makes sense except I think you may have mixed up your primes in the expression for U(t), but still, that explanation of the integral really helped.

7. Jul 14, 2007

### meopemuk

This is an amplitude for the particle to move from point x' to point x in time t. Because the particle can arrive at the point x not only from x', but from any other point in space as well. So, this expression should be integrated on x' in order to get the full amplitude of finding the particle at point x. I think that explanation given by plmokn2 is a good one.

Eugene.

8. Jul 15, 2007

### ehrenfest

So I see how you can think of it as U(t) operating on the ket |x'> to get U(t;x'), but how can you think of the relationship between the bra <x'| and the operator U(t;x')? What does <x'|U(t;x') "mean"?

9. Jul 15, 2007

### meopemuk

In the notation U(t;x, x') = < x| U(t) |x'> you can first apply the unitary operator U(t) to the ket vector |x'> on the right and obtain a new ket vector (which is a result of a time translation applied to |x'>), which I denote by
|x', t> = U(t) |x'>
In the next step you can take an inner product of |x', t> with the bra vector <x|:
U(t;x, x') = <x |x', t>
U(t;x, x') is a complex number which can be interpreted as a matrix element of the unitary operator U(t) in the basis provided by vectors |x>.

10.
Jul 15, 2007

### jostpuur

ehrenfest, you should do the exercise where Schrödinger's equation is derived from a time evolution defined with a propagator. After it, it becomes easier to believe in propagators, and in how they work. If the sources you are using don't explain it, you can get hints from here.
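To see the integral at work numerically, here is a minimal sketch (my own illustration, not from the thread) in natural units with ħ = m = 1: the free-particle kernel quoted in post 4 is evaluated on a grid, and the integral over x' becomes a matrix-vector product that spreads an initial Gaussian packet while preserving its norm.

```python
import numpy as np

hbar = m = 1.0          # natural units
t = 1.0                 # propagation time
sigma = 1.0             # initial packet width

x = np.linspace(-15.0, 15.0, 1501)   # grid used for both x and x'
dx = x[1] - x[0]

# normalized Gaussian wave packet at t = 0
psi0 = (1.0 / (np.pi * sigma**2))**0.25 * np.exp(-x**2 / (2.0 * sigma**2))

# free-particle kernel U(x,t;x') = sqrt(m/(2*pi*i*hbar*t)) * exp(i*m*(x-x')^2/(2*hbar*t))
U = np.sqrt(m / (2j * np.pi * hbar * t)) * \
    np.exp(1j * m * (x[:, None] - x[None, :])**2 / (2.0 * hbar * t))

# psi(x,t) = integral dx' U(x,t;x') psi(x',0), approximated as a Riemann sum
psi_t = U @ psi0 * dx

norm0 = np.sum(np.abs(psi0)**2) * dx
norm_t = np.sum(np.abs(psi_t)**2) * dx
rms0 = np.sqrt(np.sum(x**2 * np.abs(psi0)**2) * dx)
rms_t = np.sqrt(np.sum(x**2 * np.abs(psi_t)**2) * dx)

print(f"norm: {norm0:.4f} -> {norm_t:.4f}")        # stays ~1 (unitary evolution)
print(f"r.m.s. width: {rms0:.3f} -> {rms_t:.3f}")  # packet spreads (~factor sqrt(2) here)
```

A single multiplication by the kernel value at one x' would only propagate the amplitude originating from that one point; summing (integrating) over all x' is what collects the contributions plmokn2 describes.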
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389753341674805, "perplexity": 912.404528038633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687837.85/warc/CC-MAIN-20170921191047-20170921211047-00568.warc.gz"}
http://ndl.iitkgp.ac.in/document/VS92cmMxZ2Mrd0diTnVxQUl3Q1dsSk5uMFh0K0ZhNVZxS2luRFdhbEQwcz0
### Particle Acceleration and Fractional Transport in Turbulent Reconnection

Access Restriction: Open
Author: Isliker, Heinz ♦ Pisokas, Theophilos ♦ Vlahos, Loukas ♦ Anastasiadis, Anastasios
Source: United States Department of Energy Office of Scientific and Technical Information
Content type: Text
Language: English
Subject Keyword: ASTROPHYSICS, COSMOLOGY AND ASTRONOMY ♦ ACCELERATION ♦ DIFFUSION ♦ DISTRIBUTION ♦ ELECTRIC FIELDS ♦ ELECTRONS ♦ ENERGY SPECTRA ♦ FOKKER-PLANCK EQUATION ♦ MAGNETIC RECONNECTION ♦ PARTICLES ♦ PLASMA ♦ RANDOMNESS ♦ REFLECTION ♦ SIMULATION ♦ SPACE ♦ TRANSPORT THEORY ♦ TURBULENCE
Abstract: We consider a large-scale environment of turbulent reconnection that is fragmented into a number of randomly distributed unstable current sheets (UCSs), and we statistically analyze the acceleration of particles within this environment. We address two important cases of acceleration mechanisms when particles interact with the UCS: (a) electric field acceleration and (b) acceleration by reflection at contracting islands. Electrons and ions are accelerated very efficiently, attaining an energy distribution of power-law shape with an index 1–2, depending on the acceleration mechanism. The transport coefficients in energy space are estimated from test-particle simulation data, and we show that the classical Fokker–Planck (FP) equation fails to reproduce the simulation results when the transport coefficients are inserted into it and it is solved numerically. The cause for this failure is that the particles perform Levy flights in energy space, while the distributions of the energy increments exhibit power-law tails. We then use the fractional transport equation (FTE) derived by Isliker et al., whose parameters and the order of the fractional derivatives are inferred from the simulation data, and solving the FTE numerically, we show that the FTE successfully reproduces the kinetic energy distribution of the test particles. We discuss in detail the analysis of the simulation data and the criteria that allow one to judge the appropriateness of either an FTE or a classical FP equation as a transport model.
ISSN: 0004637X
Educational Use: Research
Learning Resource Type: Article
Publisher Date: 2017-11-01
Publisher Place: United States
Journal: Astrophysical Journal
Volume Number: 849
Issue Number: 1
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8171371221542358, "perplexity": 3004.7909053324943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00246.warc.gz"}
https://www.optiboard.com/forums/showthread.php/1897-Test-Forum/page6?s=1c3df4d95b3df354aeda933700f9e682
1. $\triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla\triangle\nabla \triangle\nabla\triangle\nabla\triangle\nabla$

2. Ok ???????

3. [pong] pong [/pong] Test

5. ## kat
This is a first, as the rest of mine have never gone through. Been enjoying reading so far.

6. $\approx \neq \pm \div \sqrt{3}$ x/y

7. ... Attached Thumbnails

8. ....

9. nothing new

10. just simply test

11. will someone please tell me why the spell check button does not work when I try to use it? Spelling is not my strong point and would lovnm to have some help here. O do I just not know how to use the button? But my god it's just a friggen button.......push the damn thing......not hard

12. Use Google chrome with built in spell check

13. Hey guys I am going to take the ABO and NCLE exam this coming November wanted to see which books or study materials are good.

14. This is a forum to "test" navigating the site. I believe the General Discussion forum would serve you better.

15. Testing

16. Testing Okay..... it works here, but I can't get it to give me "From computer file" only URL in other formats..... Grrrrr

17. Fv= ___D2____ + D1 1-t/nD2

18. Originally Posted by tmorse Fv= ___D2____ + D1 1-t/nD2 Thank you

19. Originally Posted by tmorse Fv= ___D2____ + D1 1-t/nD2 Hi Jacqui, I wonder if the 1-t/nD2 will stay under the D2. Here goes! Ted

20. Originally Posted by tmorse Hi Jacqui, I wonder if the 1-t/nD2 will stay under the D2. Here goes! Ted Nope, PhiTrace mentioned LaTex formatting (enabled) as the cause, so I will have to check it out. Cheers!

21. Originally Posted by tmorse Nope, PhiTrace mentioned LaTex formatting (enabled) as the cause, so I will have to check it out. Cheers! $Fv=\frac{D2}}1\m\frac{t}}n}\D2$

22. $\frac{D2}{1-t/nD2}$

23. Originally Posted by tmorse $\frac{D2}{1-t/nD2}$ [latex]Fv=\frac{D2}{1-t/nD2}\p\D1[\latex]

24. Originally Posted by tmorse [latex]Fv=\frac{D2}{1-t/nD2}\p\D1[\latex] ${Fv=}\frac{D2}{1-t/n}\p\{D1}$

25. Originally Posted by tmorse ${Fv=}\frac{D2}{1-t/n}\p\{D1}$ [latex]{Fv=}\frac{D2}{1-t/n} + D1
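For reference, the formula the later posts are trying to typeset (transcribed from the plain-text version in post 17; the meanings of D1, D2, t and n are the posters' own) renders as

$$F_v = \frac{D_2}{1 - (t/n)\,D_2} + D_1,$$

which can be produced with the markup \frac{D_2}{1-(t/n)D_2} + D_1 placed inside the $...$ or [latex]...[/latex] delimiters the posters were experimenting with.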
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5076833367347717, "perplexity": 7576.901021505131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00564.warc.gz"}
http://www.thefullwiki.org/Receptor_(biochemistry)
# Receptor (biochemistry)

In biochemistry, a receptor is a protein molecule, embedded in either the plasma membrane or the cytoplasm of a cell, to which one or more specific kinds of signaling molecules may attach. A molecule which binds (attaches) to a receptor is called a ligand, and may be a peptide (short protein) or other small molecule, such as a neurotransmitter, a hormone, a pharmaceutical drug, or a toxin. Each kind of receptor can bind only certain ligand shapes. Each cell typically has many receptors, of many different kinds. When such binding occurs, the receptor undergoes a conformational change (a change in the three-dimensional shape of the receptor protein, without changing its sequence), which ordinarily initiates some sort of cellular response. However, some ligands (e.g. antagonists) merely block receptors without inducing any response. Ligand-induced changes in receptors result in cellular changes which constitute the biological activity of the ligands. Many functions of the human body are regulated by receptors responding uniquely to specific molecules in this way.

## Overview

The shapes and actions of receptors are studied by X-ray crystallography, dual polarisation interferometry, computer modelling, and structure-function studies, which have advanced the understanding of drug action at the binding sites of receptors. Structure-activity relationships correlate induced conformational changes with biomolecular activity, and are studied using dynamic techniques such as circular dichroism and dual polarisation interferometry.

(Figure: a transmembrane receptor. E = extracellular space; I = intracellular space; P = plasma membrane.)

Depending on their functions and ligands, several types of receptors may be identified.

## Binding and activation

Ligand binding is an equilibrium process. Ligands bind to receptors and dissociate from them according to the law of mass action.

$\left[\mathrm{Ligand}\right] \cdot \left[\mathrm{Receptor}\right]\;\;\overset{K_d}{\rightleftharpoons}\;\;\left[\text{Ligand-receptor complex}\right]$ (the brackets stand for concentrations)

One measure of how well a molecule fits a receptor is the binding affinity, which is inversely related to the dissociation constant Kd. A good fit corresponds with high affinity and low Kd. The final biological response (e.g. second messenger cascade or muscle contraction) is only achieved after a significant number of receptors are activated. The receptor-ligand affinity is greater than enzyme-substrate affinity.[citation needed] Whilst both interactions are specific and reversible, there is no chemical modification of the ligand as seen with the substrate upon binding to its enzyme. If the receptor exists in two states, then the ligand binding must account for these two receptor states; such two-state binding is thought to occur as an activation mechanism in many receptors.

### Constitutive activity

A receptor which is capable of producing its biological response in the absence of a bound ligand is said to display "constitutive activity".[1] The constitutive activity of receptors may be blocked by inverse agonist binding.
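As a brief aside to make the affinity relation in the "Binding and activation" section above concrete: for a simple one-site, non-cooperative model, the equilibrium fraction of occupied receptors is [L]/([L] + Kd), so a low Kd (high affinity) gives half-maximal occupancy at a low ligand concentration. A minimal sketch with illustrative numbers (not taken from this article):

```python
# Minimal sketch: fractional receptor occupancy from the law of mass action,
# assuming a simple one-site, non-cooperative binding model. Values are illustrative.

def occupancy(ligand_conc_nM: float, kd_nM: float) -> float:
    """Fraction of receptors bound at equilibrium: [L] / ([L] + Kd)."""
    return ligand_conc_nM / (ligand_conc_nM + kd_nM)

if __name__ == "__main__":
    kd = 10.0  # hypothetical dissociation constant (nM); lower Kd = higher affinity
    for conc in (1.0, 10.0, 100.0):
        print(f"[L] = {conc:6.1f} nM -> occupancy = {occupancy(conc, kd):.2f}")
    # At [L] = Kd the receptor population is exactly half-occupied.
```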
Mutations in receptors that result in increased constitutive activity underlie some inherited diseases, such as precocious puberty (due to mutations in luteinizing hormone receptors) and hyperthyroidism (due to mutations in thyroid-stimulating hormone receptors). For the use of statistical mechanics in a quantitative study of the ligand-receptor binding affinity, see the comprehensive article[2] on the configuration integral.

## Agonists versus antagonists

Not every ligand that binds to a receptor also activates the receptor. The following classes of ligands exist:

• (Full) agonists are able to activate the receptor and result in a maximal biological response. Most natural ligands are full agonists.
• Partial agonists do not activate receptors thoroughly, causing responses which are partial compared to those of full agonists.
• Antagonists bind to receptors but do not activate them. This results in receptor blockage, inhibiting the binding of other agonists.
• Inverse agonists reduce the activity of receptors by inhibiting their constitutive activity.

## Peripheral membrane protein receptors

These receptors are relatively rare compared to the much more common types of receptors that cross the cell membrane. An example of a receptor that is a peripheral membrane protein is the elastin receptor.

## Transmembrane receptors

### Metabotropic receptors

#### G protein-coupled receptors

These receptors are also known as seven transmembrane receptors or 7TM receptors, because they pass through the membrane seven times.

#### Receptor tyrosine kinases

These receptors detect ligands and propagate signals via the tyrosine kinase of their intracellular domains. This family of receptors includes, for example, the insulin receptor and various growth factor receptors.

### Ionotropic receptors

Ionotropic receptors are heteromeric or homomeric oligomers.[3] They are receptors that respond to extracellular ligands and receptors that respond to intracellular ligands.

#### Extracellular ligands

| Receptor | Ligand | Ion current |
| --- | --- | --- |
| Nicotinic acetylcholine receptor | Acetylcholine, Nicotine | Na+, K+, Ca2+ [3] |
| Glycine receptor (GlyR) | Glycine, Strychnine | Cl- > HCO3- [3] |
| GABA receptors: GABA-A, GABA-C | GABA | Cl- > HCO3- [3] |
| Glutamate receptors: NMDA receptor, AMPA receptor, and Kainate receptor | Glutamate | Na+, K+, Ca2+ [3] |
| 5-HT3 receptor | Serotonin | Na+, K+ [3] |
| P2X receptors | ATP | Ca2+, Na+, Mg2+ [3] |

#### Intracellular ligands

| Receptor | Ligand | Ion current |
| --- | --- | --- |
| Cyclic nucleotide-gated ion channels | cGMP (vision), cAMP and cGTP (olfaction) | Na+, K+ [3] |
| IP3 receptor | IP3 | Ca2+ [3] |
| Intracellular ATP receptors | ATP (closes channel) [3] | K+ [3] |
| Ryanodine receptor | Ca2+ | Ca2+ [3] |

The entire repertoire of human plasma membrane receptors is listed at the Human Plasma Membrane Receptome (http://www.receptome.org).

## Role in Genetic Disorders

Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders, where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone.

## Receptor Regulation

Cells can increase (upregulate) or decrease (downregulate) the number of receptors to a given hormone or neurotransmitter to alter their sensitivity to this molecule. This is a locally acting feedback mechanism. Receptor desensitization:

• Ligand-bound desensitization.[5]
• Uncoupling of receptor effector molecules.
• Receptor sequestration (internalization).[5]

## In immune system

The main receptors in the immune system are pattern recognition receptors (PRRs), Toll-like receptors (TLRs), killer activated and killer inhibitor receptors (KARs and KIRs), complement receptors, Fc receptors, B cell receptors and T cell receptors.[6]

## References

1. Milligan G (December 2003). "Constitutive activity and inverse agonists of G protein-coupled receptors: a current perspective". Mol. Pharmacol. 64 (6): 1271–6. doi:10.1124/mol.64.6.1271. PMID 14645655.
2. Vu-Quoc, L. Configuration integral (statistical mechanics), 2008.
3. Boron, W. & Boulpaep, E. Medical Physiology. Elsevier Saunders, 2005 (updated edition), p. 90. ISBN 1-4160-2328-3.
4. Gobeil F, et al. (2006). "G-protein-coupled receptors signalling at the cell nucleus: an emerging paradigm". Can J Physiol Pharmacol. 84 (3–4): 287–97. PMID 16902576.
5. Boulay G, Chrétien L, Richard DE, Guillemette G (1994). "Short-term desensitization of the angiotensin II receptor of bovine adrenal glomerulosa cells corresponds to a shift from a high to a low affinity state". Endocrinology 135 (5): 2130–2136.
6. Lippincott's Illustrated Reviews: Immunology. Lippincott Williams & Wilkins, July 2007, p. 20. ISBN 978-0781795432.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7213337421417236, "perplexity": 15773.869525452992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202525.25/warc/CC-MAIN-20190321132523-20190321154523-00343.warc.gz"}
http://www.kjaro.com/browse/grade-10
# Topics in Grade 10 Math • ## Linear Systems (Solve by Graphing) ### example question: Solve by graphing. $$y=2x-1$$ $$y=4x-3$$ • ## Linear Systems (Special Case) ### example question: Solve by graphing. $$y=2$$ $$x=5$$ • ## Linear Systems (Substitution) ### example question: Solve by substitution. $$y=-x+4$$ $$y=x-2$$ • ## Linear Systems (Elimination) ### example question: Solve by elimination. $$2x+y=4$$ $$3x-y=6$$ • ## Linear Systems (Fraction and Decimals) ### example question: Solve. Remember to clear fractions and decimals first. $$\frac{x}{2}+\frac{y}{5}=2$$ $$\frac{3x}{2}-\frac{y}{5}=6$$ • ## Linear Systems (General Word Problems) ### example question: The sum of two numbers is 250. Their difference is 74. Find the numbers. • ## Linear Systems (Investment Problems) Karin deposited a total of 10,000 dollars in two separate accounts. One account paid 5% interest per annum, and the other paid 8% interest per annum. If the total interest earned after one year was $620, how much was invested at each rate? • ## Linear Systems (Mixture Problems) ### example question: Bess wants to make an almond cashew mix. If the almonds cost 2.50/kg and cashews cost 3.50/kg, then how many kilograms of each does she need to make 50 kilograms of mix that will sell for 2.90/kg? • ## Linear Systems (Money Problems) ### example question: A vending machine that only accepts dimes and quarters contains 36 coins totaling $7.20$. How many of each coin does the machine contain? • ## Linear Systems DST (Two Part Trip) ### example question: Adrielle traveled to her cottage 340 km away. Part of the trip was by bus that traveled 50km/h, and the other part of the trip was by car at 80 km/h. The total trip took 5 hours. a) How many hours did she travel by car? b) How many hours did she travel by bus? c) How many kilometres did she travel by car? d) How many kilometres did she travel by bus? • ## Linear Systems DST (Current \ Wind) ### example question: Colin took a boat trip 120 km upstream that lasted 5 hours. The return trip lasted 4 hours. a) find the speed of the boat in still water. b) find the speed of the current. • ## Linear Systems DST (Two Different Times) ### example question: Kevin left his house driving at 40 km/h. His brother Eric followed in his car 1 hour later traveling at 50 km/h. a) At what distance from their home did Eric catch up to Kevin? b) How long had Kevin been driving when they meet up? c) How long had Eric been driving when they meet up? • ## Length of a Line Segment (Analytic Geometry) ### example question: Use the formula below to determine the length between the points (2,1) and (5,7) $$\ell=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$$ • ## Midpoint of a Line Segment ### example question: Use the formula below to find the midpoint between A(3,2) and B(5,8). $$\text{midpoint}=\left(\frac{x_1+x_2}{2}, \frac{y_1+y_2}{2}\right)$$ • ## Equation of a Circle ### example question: Find the radius and the center of the circle that is defined by the equation: $$x^2+y^2=25$$ The general equation for a circle is $$x^2+y^2=r^2$$ • ## Slope of a Line Segment ### example question: Use the formula below to determine the slope between points A(3,2), and B(4,7) $$m=\frac{y_2-y_1}{x_2-x_1}$$ • ## Equations of Lines (Part 1) ### example question: Use the formula below to determine the equation of the line, in standard form, with a slope of 2 and passes through the point (1,3). 
$$y-y_1=m(x-x_1)$$ • ## x-intercept, y-intercept & Slope (Parallel and Perpendicular) ### example question: Find the parallel and perpendicular slopes to the line: $$3x+4y-12=0$$ • ## Equations of Lines (Part 2) ### example question: Determine the equation of the line that is parallel to 2x-y-5=0 and passes through (2,5). • ## Equations of Lines (Medians and Centroids) ### example question: Given a triangle with vertices A(0,9), B(14,1) and C(-2,-1) find: a) The equation of the median from A to BC. b) The equation of the median from C to AB. c) The centroid of the triangle ABC. • ## Equations of Lines (Altitudes and Orthocentres) ### example question: Given a triangle with vertices A(5,1), B(0,-9) and C(-9,3) find: a) The equation of the altitude from A to BC. b) The equation of the altitude from C to AB. c) The orthocentre of the triangle ABC. • ## Equations of Lines (Perpendicular / Right Bisectors and Circumcentre) ### example question: Given a triangle with vertices A(9,10), B(3,-8) and C(-7,2) find: a) The equation of the perpendicular bisector of BC. b) The equation of the perpendicular bisector of AB. c) The circumcentre of the triangle ABC. • ## Distance from a Point to a Line ### example question: Find the shortest distance from the point (3,4) to the line x=-2. • ## Adding and Subtracting Polynomials ### example question: Simplify $$(4x+3)-(3x-5)$$ • ## Multiplying Monomials (Exponent Laws) ### example question: Simplify $$(4x^2y)(3x^3y^2)$$ • ## Dividing Monomials (Exponent Laws) ### example question: Simplify $$\frac{8x^5}{2x}$$ • ## Multiplying (Monomial$\times$Polynomial) ### example question: Expand $$2(x+6)$$ • ## Expanding and Simplifying Polynomial Expressions (Part 1) ### example question: Expand and Simplify $$2x(x-4)-3x(x-3)$$ • ## Multiplying Binomials (FOIL) ### example question: Expand and Simplify $$(x+3)(x+4)$$ • ## Special Products (Squaring Binomials) ### example question: Expand and Simplify $$(x+3)^2$$ • ## Special Products (Product of Sum and Difference) ### example question: Expand and Simplify $$(x+5)(x-5)$$ • ## Expanding and Simplifying Polynomial Expressions (Part 2) ### example question: Expand and Simplify $$(2x+3)(x-4)+(x+2)^2$$ • ## Common Factoring ### example question: Factor $$2x+6$$ • ## Binomial Common Factoring ### example question: Factor $$3m(a+b)+2(a+b)$$ • ## Factoring by Grouping (Part 1) ### example question: Factor $$ax-by+bx-ay$$ • ## Trinomial Factoring (a=1) ### example question: Factor $$x^2+7x+12$$ • ## Trinomial Factoring (a$\neq$1) (Magic Box) ### example question: Factor ( using the magic box ) $$4x^2+17x+4$$ • ## Factoring Perfect Square Trinomials ### example question: Factor $$x^2+6x+9$$ • ## Factoring Difference of Squares ### example question: Factor $$x^2-25$$ • ## Factoring by Grouping (Part 2) ### example question: Factor $$x^2-6x+9-y^2$$ • ## Graphing Quadratic Functions (Vertex Form) ### example question: State the following and graph the function, given: $$y=(x-3)^2-4$$ Direction of opening. The coordinates of the vertex. The equation of the axis of symmetry. The domain and range. The maximum or minimum value. How the parabola is stretched or compressed if applicable. • ## Quadratic Functions (Word Problems Part 1) ### example question: The equation shows the height (h) of a baseball in metres as a function of time (t) in seconds. $$h=-5(t-2)^2+21$$ What was the maximum height? At what time did the ball reach its maximum height? What is the height of the ball after 1 second? What was the initial height of the ball? 
At what time does the ball hit the ground? • ## Quadratic Functions (Complete the Square) ### example question: Write the function in vertex form $y=a(x-h)^2+k$ and state the coordinates of the vertex and the maximum or minimum value. $$y=x^2+6x+1$$ • ## Quadratic Functions (Word Problems Part 2) ### example question: The equation shows the height (h) of a baseball in metres as a function of time (t) in seconds. $$h=-5t^2+40t+1$$ What was the maximum height? At what time did the ball reach its maximum height? What is the height of the ball after 1 second? What was the initial height of the ball? At what time does the ball hit the ground? • ## Quadratic Equations (Solve by Factoring) ### example question: Solve. $$(x-5)(x+3)=0$$ • ## Quadratic Equations (Word Problems Part 1) ### example question: The width of a rectangle is 1 cm shorter than its length. If the area is $6\ cm^2$, what are the dimensions? • ## Quadratic Equations (Solve using the Quadratic Formula) ### example question: The quadratic formula is: $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$ Solve using the quadratic formula. $$x^2-x-12=0$$ • ## Quadratic Equations (Word Problems Part 2) ### example question: The hypotenuse of a right triangle is 25 cm. The sum of the lengths of the other two sides is 30 cm. Find the lengths of the sides. • ## Similar Triangles ### example question: (Introductory trigonometry / trig) Find the values of $x$ and $y$. • ## Using Your Calculator ### example question: (Introductory trigonometry / trig) Find the following to the nearest thousandth. $\sin 15°=$ $\cos 73°=$ • ## sohcahtoa $(S=\frac{O}{H},\;C=\frac{A}{H},\;T=\frac{O}{A})$ ### example question: Find $x$. (Use trig ratios) (trigonometry / trig) • ## Word problems (Part 1) (Trigonometry) ### example question: The sun is at an angle of elevation of 40°. A tree casts a shadow of 25 m. How tall is the tree? (Use trig ratios) (trigonometry / trig) • ## Solving Triangles (Advanced Trigonometry) (Two Triangles) ### example question: Find $BC$. (Use trig ratios) (trigonometry / trig) • ## Sine Law $\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}$ or $\frac{\sin A}{a}=\frac{\sin B}{b}=\frac{\sin C}{c}$ ### example question: Find $x$. (trigonometry / trig) • ## Cosine Law (Unknown side - SAS) $a^2=b^2+c^2-2(b)(c)(\cos A)$ ### example question: Find $x$. (trigonometry / trig) • ## Cosine Law (Unknown Angle - SSS) $\cos A=\frac{b^2+c^2-a^2}{2bc}$ ### example question: Find $\angle C$. (trigonometry / trig)
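The figure for this last cosine-law example is not reproduced here, so purely as an illustration, assume a triangle with sides $a=7$, $b=5$, $c=6$ (these side lengths are assumed, not from the original question). Applying the same unknown-angle formula with the labels shifted to the angle $C$:

$$\cos C=\frac{a^2+b^2-c^2}{2ab}=\frac{49+25-36}{2(7)(5)}=\frac{38}{70}\approx 0.543,\qquad C\approx 57.1^{\circ}.$$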
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9050787687301636, "perplexity": 10870.476600757976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00501.warc.gz"}
https://brilliant.org/problems/divisible-by-this-year-part-4-insanity/
# Divisible by this year??? (Part 4: INSANITY!!!!!!!!) $$n!$$ or $$n$$-factorial is the product of all integers from $$1$$ up to $$n$$ $$(n! = 1 \times 2 \times 3 \times ... \times n)$$. Let $$n!!$$ denote the product of all factorials from $$1!$$ up to $$n!$$ $$(n!! = 1! \times 2! \times 3! \times ... \times n!)$$. Let $$n!!!$$ denote the product of all double factorials from $$1!!$$ up to $$n!!$$ $$(n!!! = 1!! \times 2!! \times 3!! \times ... \times n!!)$$. Find the maximum integral value of $$k$$ such that $$2014^k$$ divides $$2014!!!$$. You may also try these problems: Divisible by this year??? Divisible by this year??? (Part 2: Factorials) This problem is part of the set "Symphony".
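Because the double- and triple-factorial notation here is non-standard (it does not mean the usual skip-factorial), a small sketch may help pin the definitions down; the function names below are illustrative only and do not solve the problem:

```python
# Minimal sketch of the problem's (non-standard) notation:
#   n!   = 1 * 2 * ... * n
#   n!!  = 1! * 2! * ... * n!      (product of factorials, as defined in the problem)
#   n!!! = 1!! * 2!! * ... * n!!   (product of those, as defined in the problem)
from math import factorial

def double_fact(n: int) -> int:
    out = 1
    for k in range(1, n + 1):
        out *= factorial(k)
    return out

def triple_fact(n: int) -> int:
    out = 1
    for k in range(1, n + 1):
        out *= double_fact(k)
    return out

if __name__ == "__main__":
    for n in range(1, 5):
        print(n, factorial(n), double_fact(n), triple_fact(n))
```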
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9455754160881042, "perplexity": 431.13545455685016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267868135.87/warc/CC-MAIN-20180625150537-20180625170537-00023.warc.gz"}
http://mathoverflow.net/questions/99663/homotopy-groups-of-on
# homotopy groups of O(n)

Can you give me a reference book where homotopy groups of O(n) are calculated?

There are fibre sequences $$O(n-1) \to O(n) \to S^{n-1}$$ which allow you to inductively compute the homotopy groups of $O(n)$ in terms of the homotopy of $S^{k}$, for $k < n$. But the latter is one of the main open questions in homotopy theory. Of course, real Bott periodicity tells you the homotopy groups of $O = \lim_{n\to \infty} O(n)$. By the previous fibre sequence, this is the same as $\pi_k(O(n))$ for $n>k+1$ -- the homotopy groups stabilise at that point -- since $\pi_k(S^{n-1}) = 0$ in that range. But the higher homotopy of $O(n)$ for a fixed $n$ is less tractable.
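For quick reference (added here, not part of the original answer): in the stable range $n > k+1$, real Bott periodicity gives the period-8 pattern

$$\pi_k(O)\cong \mathbb{Z}/2,\ \mathbb{Z}/2,\ 0,\ \mathbb{Z},\ 0,\ 0,\ 0,\ \mathbb{Z}\qquad\text{for } k\equiv 0,1,2,3,4,5,6,7 \pmod 8 .$$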
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459196925163269, "perplexity": 150.54774207022584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869404.51/warc/CC-MAIN-20150124161109-00140-ip-10-180-212-252.ec2.internal.warc.gz"}
http://qualitymathproducts.com/dividing-with-fraction-bars/
# Dividing with Fraction Bars The Teacher's Guides cover this in more depth; this is a good introduction that shows how simple the normally complex "concept" of fraction division can be! Before showing how to use Fraction Bars to divide ¼ by 2/3, let's look at dividing 5/6 by 1/3. Looks tricky, but it can be so easy! Using the idea of fitting one amount into another, we can see that 1/3 "fits into" 5/6 twice, with 1/6 remaining. By comparing the remaining 1/6 to the divisor 1/3, we see 1/6 is half of the divisor 1/3. So 5/6 divided by 1/3 is 2 and 1/2. This is similar to the reasoning when dividing one whole number by another. For example, 17 divided by 5 is 3 with a remainder of 2. So the quotient is 3 2/5. In this example, we compare the remainder, 2, to the divisor, 5, and obtain the ratio 2/5. Now let's look at ¼ divided by 2/3. Since 2/3 is greater than ¼, it "fits into" ¼ zero times with a remainder of ¼. So we compare the remainder ¼ to the divisor 2/3. To make this comparison, it is convenient to replace the first two bars by bars with parts of the same size. Now if we compare 3 shaded parts to 8 shaded parts, the ratio is 3/8. ¼ ÷ 2/3   =   3/12 ÷ 8/12   =   3/8 Starting with examples where one shaded amount fits into a second shaded amount a whole number of times, students will be able to see that division of fractions is comparing two amounts, just like division of whole numbers. In this way, division of fractions makes sense. An initial example like the one above for 5/6 divided by 1/3, where students can see that 1/3 fits into 5/6 two and one-half times, is good. Later, bring in the "invert and multiply" rule to show that it gives the same answers as this method, which students can see makes sense from a few simple examples with Fraction Bars. So viewing division as comparing two amounts to see how many times greater one amount is than another works whether the numbers being used are whole numbers or fractions. And once we obtain bars with parts of the same size (i.e. common denominators), finding the quotient of two fractions is just a matter of finding the quotient of whole numbers of parts of the same size.
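The common-denominator reasoning above can also be checked mechanically; a small sketch using Python's fractions module (not part of the original article) confirms both worked examples:

```python
# Minimal check of the two examples above using exact rational arithmetic.
from fractions import Fraction

# 5/6 divided by 1/3: over the common denominator 6, this is 5 parts vs 2 parts.
print(Fraction(5, 6) / Fraction(1, 3))   # 5/2, i.e. 2 and 1/2

# 1/4 divided by 2/3: over the common denominator 12, this is 3 parts vs 8 parts.
print(Fraction(1, 4) / Fraction(2, 3))   # 3/8
```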
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913390636444092, "perplexity": 664.7734177508445}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320323.17/warc/CC-MAIN-20170624184733-20170624204733-00246.warc.gz"}
http://gmatclub.com/forum/duke-fuqua-2013-calling-all-applicants-134412-1320.html?kudos=1
Find all School-related info fast with the new School-Specific MBA Forum It is currently 06 Feb 2016, 03:59 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # Duke (Fuqua) 2013 - Calling All Applicants Author Message Intern Joined: 15 Jan 2013 Posts: 22 Followers: 0 Kudos [?]: 5 [0], given: 4 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  20 Feb 2013, 23:46 For all those who were invited for interview/not invited, did your application status change from Submitted to Completed-Application under review or something similar. Mine still says Submitted and I'm a little concerned that I may not have all the information in. Any information on this is most appreciated gents. - N Manager Joined: 18 Jun 2012 Posts: 145 Concentration: Healthcare, General Management GMAT Date: 09-14-2012 Followers: 1 Kudos [?]: 17 [0], given: 1 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  21 Feb 2013, 06:53 chusu123 wrote: For all those who were invited for interview/not invited, did your application status change from Submitted to Completed-Application under review or something similar. Mine still says Submitted and I'm a little concerned that I may not have all the information in. Any information on this is most appreciated gents. - N I was thinking the same thing. Mine still says submitted as well, after interview. Manager Joined: 05 Nov 2009 Posts: 128 Concentration: Strategy, Finance GMAT 1: 700 Q V WE: Management Consulting (Consulting) Followers: 1 Kudos [?]: 37 [0], given: 67 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  22 Feb 2013, 04:56 EChopeful12 wrote: ... I mean, 85%+ of the interview went pretty well I feel but there are a few things I wish I would have said differently... Now the wait. I know how you feel. I had mine recently, had to pause for a few seconds to answer one question. I eventually came up with what I think is a good answer, but I'm not sure whether those few seconds did any damage or not. Manager Joined: 07 Feb 2013 Posts: 64 Followers: 0 Kudos [?]: 18 [0], given: 31 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  22 Feb 2013, 05:39 wpye86 wrote: Hi, Is Fuqua hard to apply in R3? (Last Round). Who would apply in R3? Share some of your insights plz. Thanks I think its generally accepted that any Elite+ school is tougher in R3. Your competing with waitlisters to round out the class and have fewer scholarship opportunities. theK wrote: EChopeful12 wrote: ... I mean, 85%+ of the interview went pretty well I feel but there are a few things I wish I would have said differently... Now the wait. I know how you feel. I had mine recently, had to pause for a few seconds to answer one question. I eventually came up with what I think is a good answer, but I'm not sure whether those few seconds did any damage or not. Repeating the question and thinking about the question for a few seconds doesn't do damage... It shows thought and can show that you're not an over rehersed automaton. It should be a good conversation. Don't worry about it. 
BSchool Forum Moderator Joined: 23 Oct 2012 Posts: 85 Schools: Yale '16 (M) Followers: 2 Kudos [?]: 38 [0], given: 45 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  22 Feb 2013, 09:02 Just had my interview over Skype with an adcom member. I think he got the time mixed up so it ended up being 1 hour later than what was communicated earlier, so by the time it started it was 11 PM where I was ^^; I wore a suit, but my interviewer was just wearing a sweatshirt. The questions I got were basically why MBA, why Duke, strengths & weaknesses, and getting to know me as a person, but they were phrased differently so some of them kinda threw me off a bit. For example, he started with something like "tell me about where you are in life right now" so I answered with what I typically would with a "walk me through your resume" question. But I was mentally scrambling for a second or two there. He didn't have a question list (as far as I could tell), and made some notes. He seemed to have a 45-min time limit because as soon as we hit the mark, he rushed to end the interview abruptly. Overall, it was more like a question & answer and less conversational than what I had expected. Intern Joined: 08 Feb 2013 Posts: 26 Followers: 0 Kudos [?]: 2 [0], given: 2 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  22 Feb 2013, 11:39 are R2 decisions released on the same day? Manager Joined: 07 Feb 2013 Posts: 64 Followers: 0 Kudos [?]: 18 [0], given: 31 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  22 Feb 2013, 11:49 mrzod wrote: are R2 decisions released on the same day? My understanding is that they're released on Mar 18 via system/email. With phone calls typically coming that or following days. Manager Joined: 17 Jan 2010 Posts: 186 Awesome Level: 10++ Followers: 0 Kudos [?]: 29 [0], given: 26 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  22 Feb 2013, 11:57 New tuition rates for Duke's graduate and professional schools in 2013-14 have been set: -- Fuqua School of Business: $55,300 (daytime MBA), up 4.5 percent. ouch Intern Joined: 02 Nov 2012 Posts: 27 Followers: 0 Kudos [?]: 0 [0], given: 0 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink] 22 Feb 2013, 14:10 Guys' does anyone know anything about Duke's scholarship stats and when do they inform of scholarship decisions? Manager Joined: 17 Jan 2010 Posts: 186 Awesome Level: 10++ Followers: 0 Kudos [?]: 29 [0], given: 26 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink] 22 Feb 2013, 14:15 sunjuu1 wrote: Guys' does anyone know anything about Duke's scholarship stats and when do they inform of scholarship decisions? something like 1/3rd of the class gets some sort of a scholarship. They inform you within a few days of your acceptance by phone call, and also a letter in your admit package. Intern Joined: 16 Feb 2013 Posts: 7 Followers: 0 Kudos [?]: 0 [0], given: 3 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink] 22 Feb 2013, 14:20 For those who interviewed in Durham, what was the dress code that you adhered to? Just trousers and dress shirt? No suits, no tie?I know the emails they sent out mentioned business casual, but I'm just taking an informal survey anyway. Appreciate it! Director Joined: 07 Jan 2013 Posts: 758 Location: United States Concentration: Finance Followers: 5 Kudos [?]: 141 [0], given: 204 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink] 22 Feb 2013, 14:28 SnowDay wrote: For those who interviewed in Durham, what was the dress code that you adhered to? 
Just trousers and dress shirt? No suits, no tie?I know the emails they sent out mentioned business casual, but I'm just taking an informal survey anyway. Appreciate it! When I was there about 50% were wearing suits and 50% were wearing either shirt/tie or shirt/sports coat. All depends on what you feel more confident in. _________________ Duke MBA Class of 2016 Director Joined: 07 Jan 2013 Posts: 758 Location: United States Concentration: Finance Followers: 5 Kudos [?]: 141 [0], given: 204 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink] 22 Feb 2013, 14:29 bond007 wrote: New tuition rates for Duke's graduate and professional schools in 2013-14 have been set: -- Fuqua School of Business:$55,300 (daytime MBA), up 4.5 percent. ouch ahh don't remind. Is MBA really worth it? Well, I am not going to worry about the tution until I get in somewhere. _________________ Duke MBA Class of 2016 Current Student Joined: 04 Feb 2013 Posts: 44 Concentration: Finance Followers: 0 Kudos [?]: 2 [0], given: 0 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  22 Feb 2013, 18:24 SnowDay wrote: For those who interviewed in Durham, what was the dress code that you adhered to? Just trousers and dress shirt? No suits, no tie?I know the emails they sent out mentioned business casual, but I'm just taking an informal survey anyway. Appreciate it! When I was there, none of the interviewers were wearing a suit but all of the interviewees were in suits Intern Joined: 07 Feb 2013 Posts: 9 Location: United States Concentration: Healthcare, Strategy GMAT 1: 710 Q47 V41 GPA: 3.22 WE: Research (Pharmaceuticals and Biotech) Followers: 0 Kudos [?]: 7 [0], given: 0 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  23 Feb 2013, 14:54 I think I was probably the last to interview for R2 today at 4:00, I feel like I had a good interview and everyone else I talked to seemed to have a good interview as well, so I have no idea what that means about how they are going to select. There's no question I received during the interview that has not been previously mentioned on this thread. Good luck everyone. Director Joined: 07 Jan 2013 Posts: 758 Location: United States Concentration: Finance Followers: 5 Kudos [?]: 141 [0], given: 204 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  23 Feb 2013, 14:58 ejko1984 wrote: I think I was probably the last to interview for R2 today at 4:00, I feel like I had a good interview and everyone else I talked to seemed to have a good interview as well, so I have no idea what that means about how they are going to select. There's no question I received during the interview that has not been previously mentioned on this thread. Good luck everyone. They will review our app again and try to connect it with our interview responses. The countdown begins. I can't wait to put on my Duke T-shirt. _________________ Duke MBA Class of 2016 Manager Joined: 03 Nov 2009 Posts: 65 Concentration: Technology GMAT 1: 700 Q V GPA: 3.1 Followers: 2 Kudos [?]: 12 [0], given: 5 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  23 Feb 2013, 18:04 I just had my interview in SF. Very conversational, no curve balls - standard questions. Why MBA? Why Duke? How will you contribute? What do you like to do for fun? Good luck everyone! 
Manager Joined: 21 Sep 2012 Posts: 80 Location: China GMAT 1: 740 Q49 V42 GPA: 3.38 WE: General Management (Transportation) Followers: 2 Kudos [?]: 26 [0], given: 14 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  23 Feb 2013, 20:28 Just completed my interview in Shanghai. My interview lasted exactly 45 minutes, and most of the questions (at least early on) were on the list on page 56. I was a bit surprised that I wasn't asked more about my strengths/weaknesses, though we discussed in some detail the challenges I have faced in my career to date. I could have done a better job of explaining why I am changing industries, and nearly referred to Durham as Darden (oh noes!). Fortunately I caught myself. My interviewer was a recent alumna, and I was frankly quite impressed. She is in my target industry, so I hope that my explanation of what I expected post-Duke were on-track. There are a few hours of activities in Shanghai this afternoon, and I hope to gain a bit of further insight/leave a bit of an impression. Manager Joined: 31 Oct 2011 Posts: 118 Location: China Concentration: Finance, Entrepreneurship GMAT 1: 750 Q51 V40 WE: Investment Banking (Investment Banking) Followers: 2 Kudos [?]: 11 [0], given: 3 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  24 Feb 2013, 03:58 Zomba wrote: Just completed my interview in Shanghai. My interview lasted exactly 45 minutes, and most of the questions (at least early on) were on the list on page 56. I was a bit surprised that I wasn't asked more about my strengths/weaknesses, though we discussed in some detail the challenges I have faced in my career to date. I could have done a better job of explaining why I am changing industries, and nearly referred to Durham as Darden (oh noes!). Fortunately I caught myself. My interviewer was a recent alumna, and I was frankly quite impressed. She is in my target industry, so I hope that my explanation of what I expected post-Duke were on-track. There are a few hours of activities in Shanghai this afternoon, and I hope to gain a bit of further insight/leave a bit of an impression. Zomba, it was nice meeting you at the Fuqua session today. We'll have many things to share on "how to raise a baby" in the future if we are both lucky enough to make it to Fuqua. Good luck, man. Manager Joined: 21 Sep 2012 Posts: 80 Location: China GMAT 1: 740 Q49 V42 GPA: 3.38 WE: General Management (Transportation) Followers: 2 Kudos [?]: 26 [0], given: 14 Re: Duke (Fuqua) 2013 - Calling All Applicants [#permalink]  24 Feb 2013, 04:59 omnivorous wrote: Zomba, it was nice meeting you at the Fuqua session today. We'll have many things to share on "how to raise a baby" in the future if we are both lucky enough to make it to Fuqua. Good luck, man. Likewise, omnivorous. Quite a few impressive folks this afternoon at the networking event. Best of luck! Of the ~25 applicants that I met today, nearly all would be solid admits. I was more impressed with this group than many of the others I've seen (even from schools that may be more 'prestigious'). The extensive networking time was a nice touch, as I wasn't able to spend time with fellow applicants at other hub interviews. Such an event really gave me a brief glimpse of 'Team Fuqua' in action. We are all stuck in the same waiting game now, I guess. Best of luck to all! Re: Duke (Fuqua) 2013 - Calling All Applicants   [#permalink] 24 Feb 2013, 04:59 Go to page   Previous    1  ...  64   65   66   67   68   69   70  ...  
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2727651596069336, "perplexity": 6040.8310664021255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146302.25/warc/CC-MAIN-20160205193906-00063-ip-10-236-182-209.ec2.internal.warc.gz"}
http://clay6.com/qa/8769/two-cards-are-drawn-with-replacement-from-a-well-shuffled-deck-of-52-cards-
Browse Questions # Two cards are drawn with replacement from a well shuffled deck of $52$ cards. Find the mean and variance for the number of aces. Toolbox: • If S is a sample space with a probability measure and X is a real valued function defined over the elements of S, then X is called a random variable. • Types of Random variables : • (1) Discrete Random variable (2) Continuous Random variable • Discrete Random Variable :If a random variable takes only a finite or a countable number of values, it is called a discrete random variable. • Continuous Random Variable :A Random Variable X is said to be continuous if it can take all possible values between certain given limits. i.e., X is said to be continuous if its values cannot be put in 1 − 1 correspondence with N, the set of Natural numbers. • The probability mass function (a discrete probability function) $P(x)$ is a function that satisfies the following properties : • (1) $P(X=x)=P(x)=P_x$ • (2) $P(x)\geq 0$ for all real $x$ • (3) $\sum P_i=1$ • Moments of a discrete random variable : • (i) About the origin : $\mu_r'=E(X^r)=\sum P_ix_i^{\Large r}$ • First moment : $\mu_1'=E(X)=\sum P_ix_i$ • Second moment : $\mu_2'=E(X^2)=\sum P_ix_i^2$ • (ii) About the mean : $\mu_n=E(X-\bar{X})^n=\sum (x_i-\bar{x})^nP_i$ • First moment : $\mu_1=0$ • Second moment : $\mu_2=E(X-\bar{X})^2=E(X^2)-[E(X)]^2=\mu_2'-(\mu_1')^2$ • $\mu_2=Var(X)$ Step 1: Let $X$ be the random variable denoting the number of aces when 2 cards are drawn,with replacement from well-shuffled pack of 52 cards. $X$ takes the values 0,1,2 Step 2: $P(X=0)$=Probability of no aces $\qquad\quad\;=\large\frac{48}{52}\times \frac{48}{52}$ $\qquad\quad\;=\large\frac{144}{169}$ $P(X=1)=$Probability of 1 ace $\qquad\quad\;=2C_1\large\frac{4}{52}\times \frac{48}{52}$ $\qquad\quad\;=2\times\large\frac{1}{13}\times \frac{12}{13}$ $\qquad\quad\;=\large\frac{24}{169}$ $P(X=2)=$Probability of 2 aces $\qquad\quad\;=\large\frac{4}{52}\times \frac{4}{52}$ $\qquad\quad\;=\large\frac{1}{169}$ Step 3: The probability distribution of $X$ is given by Step 4: Mean =$E(X)=\sum x_iP_i$ $\qquad=0\times \large\frac{144}{169}$$+1\times \large\frac{24}{169}$$+2\times \large\frac{1}{169}$ $\qquad=\large\frac{26}{169}$ $\qquad=\large\frac{2}{13}$ Step 5: Var$(X)=E(X^2)-[E(X)]^2$ $E(X^2)=\sum x_i^2P_i$ $\qquad\;\;=0\times \large\frac{144}{169}$$+1\times \large\frac{24}{169}$$+4\times \large\frac{1}{169}$ $\qquad\;\;=\large\frac{28}{169}$ $\therefore Var(X)=\large\frac{28}{169}-\frac{4}{169}$ $\qquad\qquad=\large\frac{24}{169}$
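As a quick check of the result above, the distribution worked out in Step 2 (P(X=0)=144/169, P(X=1)=24/169, P(X=2)=1/169) can be tabulated and its moments computed exactly; a minimal sketch:

```python
# Minimal check of the mean and variance computed above, using exact fractions.
from fractions import Fraction as F

# Probability distribution of X = number of aces in 2 draws with replacement.
dist = {0: F(144, 169), 1: F(24, 169), 2: F(1, 169)}
assert sum(dist.values()) == 1

mean = sum(x * p for x, p in dist.items())                 # E[X]
second_moment = sum(x * x * p for x, p in dist.items())    # E[X^2]
variance = second_moment - mean ** 2

print(mean)      # 2/13
print(variance)  # 24/169
```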
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7889757752418518, "perplexity": 535.7223429116626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00031-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.nature.com/articles/s41467-021-23061-8?error=cookies_not_supported
Introduction Light is a prominent tool to probe the properties of materials and their electronic structure, as evidenced by the widespread use of light-based spectroscopies across the physical sciences1,2. Among these tools, far-field optical techniques are particularly prevalent, but are constrained by the diffraction limit and the mismatch between optical and electronic length scales to probe the response of materials only at large length scales (or, equivalently, at small momenta). Plasmon polaritons—hybrid excitations of light and free carriers—provide a means to overcome these constraints through their ability to confine electromagnetic radiation to the nanoscale3. Graphene, in particular, supports gate-tunable plasmons characterized by an unprecedentedly strong confinement of light4,5,6. When placed near a metal, graphene plasmons (GPs) are strongly screened and acquire a nearly linear (acoustic-like) dispersion7,8,9,10 (contrasting with the square-root-type dispersion of conventional GPs). Crucially, such acoustic graphene plasmons (AGPs) in graphene–dielectric–metal (GDM) structures have been shown to exhibit even higher field confinement than conventional GPs with the same frequency, effectively squeezing light into the few-nanometer regime8,9,10,11. Recently, using scanning near-field optical microscopy, these features were exploited to experimentally measure the conductivity of graphene, σ(q,ω), across its frequency (ω) and momentum (q) dependence simultaneously8. The observation of momentum dependence implies a nonlocal response (i.e., response contributions at position r from perturbations at $${\bf{r}}^{\prime}$$), whose origin is inherently quantum mechanical. Incidentally, traditional optical spectroscopic tools cannot resolve nonlocal response in extended systems due to the intrinsically small momenta k0 ≡ ω/c carried by far-field photons. Acoustic graphene plasmons, on the other hand, can carry large momenta—up to a significant fraction of the electronic Fermi momentum kF and with group velocities asymptotically approaching the electron's Fermi velocity vF—and so can facilitate explorations of nonlocal (i.e., q-dependent) response not only in graphene itself but also, as we detail in this Article, in nearby materials. So far, however, only aspects related to the quantum response of graphene have been addressed8, leaving any quantum nonlocal aspects of the adjacent metal's response unattended, despite their potentially substantial impact at nanometric graphene–metal separations12,13,14,15,16. Here, we present a theoretical framework that simultaneously incorporates quantum nonlocal effects in the response of both the graphene and the metal substrate for AGPs in GDM heterostructures. Further, our approach establishes a concrete proposal for experimentally measuring the low-frequency nonlocal electrodynamic response of metals. Our model treats graphene at the level of the nonlocal random-phase approximation (RPA)4,9,17,18,19 and describes the quantum aspects of the metal's response—including nonlocality, electronic spill-out/spill-in, and surface-enabled Landau damping—using a set of microscopic surface-response functions known as the Feibelman d-parameters12,13,15,16,20,21. These parameters, d⊥ and d∥, measure the frequency-dependent centroids of the induced charge density and of the normal derivative of the tangential current density, respectively (Supplementary Note 1).
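For readers without the article's Supplementary Note 1 at hand, the standard Feibelman-type definitions behind these quantities can be summarized as first moments of the induced surface response; sign and orientation conventions vary between references, so the expressions below are a conventional summary rather than a quotation from the article:

$$d_{\perp}(\omega)=\frac{\int \mathrm{d}z\; z\,\rho_{\mathrm{ind}}(z,\omega)}{\int \mathrm{d}z\;\rho_{\mathrm{ind}}(z,\omega)},\qquad d_{\parallel}(\omega)=\frac{\int \mathrm{d}z\; z\,\partial_{z}J_{\parallel}^{\mathrm{ind}}(z,\omega)}{\int \mathrm{d}z\;\partial_{z}J_{\parallel}^{\mathrm{ind}}(z,\omega)},$$

where z is the coordinate normal to the metal surface, ρ_ind is the induced charge density, and J∥^ind is the tangential component of the induced current density.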
Using a combination of numerics and perturbation theory, we show that the AGPs are spectrally shifted by the quantum surface-response of the metal: toward the red for $${\rm{Re}} {\,}{d}_{\perp } > \,0$$ (associated with electronic spill-out of the induced charge density) and toward the blue for $${\rm{Re}} {\,}{d}_{\perp } < \,0$$ (signaling an inward shift, or “spill-in”). Interestingly, these shifts are not accompanied by a commensurately large quantum broadening nor by a reduction of the AGP’s quality factor, thereby providing the theoretical support explaining recent experimental observations11. Finally, we discuss how state-of-the-art measurements of AGPs could be leveraged to map out the low-frequency quantum nonlocal surface response of metals experimentally. Our findings have significant implications for our ability to optimize photonic designs that interface far- and mid-infrared optical excitations—such as AGPs—with metals all the way down to the nanoscale, with pursuant applications in, e.g., ultracompact nanophotonic devices, nanometrology, and in the surface sciences more broadly. Results Theory We consider a GDM heterostructure (see Fig. 1) composed of a graphene sheet with a surface conductivity σ ≡ σ(q,ω) separated from a metal substrate by a thin dielectric slab of thickness t and relative permittivity ϵ2 ≡ ϵ2(ω); finally, the device is covered by a superstrate of relative permittivity ϵ1 ≡ ϵ1(ω). While the metal substrate may, in principle, be represented by a nonlocal and spatially non-uniform (near the interface) dielectric function, here we abstract its contributions into two parts: a bulk, local contribution via $${\epsilon }_{\text{m}}\equiv {\epsilon }_{\text{m}}(\omega )={\epsilon }_{\infty }(\omega )-{\omega }_{\text{p}}^{2}/({\omega }^{2}+\text{i}\omega {\gamma }_{\text{m}})$$, and a surface, quantum contribution included through the d-parameters. These parameters are quantum-mechanical surface-response functions, defined by the first moments of the microscopic induced charge (d) and of the normal derivative of the tangential current (d); see Fig. 1 (Supplementary Note 1 gives a concise introduction). They allow the leading-order corrections to classicality to be conveniently incorporated via a surface dipole density ( d) and a surface current density ( d)9,15,16, and can be obtained either by first-principles computation20,21, semiclassical models, or experiments15. The electromagnetic excitations of any system can be obtained by analyzing the poles of the (composite) system’s scattering coefficients. For the AGPs of a GDM structure, the relevant coefficient is the p-polarized reflection (or transmission) coefficient, whose poles are given by $$1\ -\ {r}_{p}^{2|{\mathrm{g}}| 1}\ {r}_{p}^{2|{\mathrm{m}}}\ {\text{e}}^{{\text{i}}2{k}_{z,2}t}=0$$ (ref. 22). Here, $${r}_{p}^{2| \text{g}| 1}$$ and $${r}_{p}^{2| \text{m}}$$ denote the p-polarized reflection coefficients for the dielectric–graphene–dielectric and the dielectric–metal interface (detailed in Supplementary Note 2), respectively. Each coefficient yields a material-specific contribution to the overall quantum response: $${r}_{p}^{2|\text{g}| 1}$$ incorporates graphene’s via σ(q,ω) and $${r}_{p}^{2| \text{m}}$$ incorporates the metal’s via the d-parameters (see Supplementary Note 2). The complex exponential [with $${k}_{z,2}\equiv {({\epsilon }_{2}{k}_{0}^{2}-{q}^{2})}^{1/2}$$, where q denotes the in-plane wavevector] incorporates the effects of multiple reflections within the slab. 
Thus, using the above-noted reflection coefficients (defined explicitly in Supplementary Note 2), we obtain a quantum-corrected AGP dispersion equation: $$\left[\frac{\epsilon_{1}}{\kappa_{1}}+\frac{\epsilon_{2}}{\kappa_{2}}+\frac{\text{i}\,\sigma}{\omega\epsilon_{0}}\right]\left[\epsilon_{\text{m}}\kappa_{2}+\epsilon_{2}\kappa_{\text{m}}-\left(\epsilon_{\text{m}}-\epsilon_{2}\right)\left(q^{2}d_{\perp}-\kappa_{2}\kappa_{\text{m}}d_{\parallel}\right)\right] =\left[\frac{\epsilon_{1}}{\kappa_{1}}-\frac{\epsilon_{2}}{\kappa_{2}}+\frac{\text{i}\,\sigma}{\omega\epsilon_{0}}\right]\left[\epsilon_{\text{m}}\kappa_{2}-\epsilon_{2}\kappa_{\text{m}}+\left(\epsilon_{\text{m}}-\epsilon_{2}\right)\left(q^{2}d_{\perp}+\kappa_{2}\kappa_{\text{m}}d_{\parallel}\right)\right]{\text{e}}^{-2\kappa_{2}t},$$ (1) for in-plane AGP wavevector q and out-of-plane confinement factors $$\kappa_{j}\equiv (q^{2}-\epsilon_{j}k_{0}^{2})^{1/2}$$ for j ∈ {1, 2, m}. Since AGPs are exceptionally subwavelength (with confinement factors up to almost 300)8,10,11, the nonretarded limit (wherein κj → q) constitutes an excellent approximation. In this regime, and for encapsulated graphene, i.e., where ϵd ≡ ϵ1 = ϵ2, Eq. (1) simplifies to $$\left[1+\frac{2\epsilon_{\text{d}}}{q}\frac{\omega\epsilon_{0}}{\text{i}\,\sigma}\right]\left[\frac{\epsilon_{\text{m}}+\epsilon_{\text{d}}}{\epsilon_{\text{m}}-\epsilon_{\text{d}}}-q\left(d_{\perp}-d_{\parallel}\right)\right]=\left[1+q\left(d_{\perp}+d_{\parallel}\right)\right]{\text{e}}^{-2qt}.$$ (2) For simplicity and concreteness, we will consider a simple jellium treatment of the metal such that d∥ vanishes due to charge neutrality21,23, leaving only d⊥ nonzero. Next, we exploit the fact that AGPs typically span frequencies across the terahertz (THz) and mid-infrared (mid-IR) spectral ranges, i.e., well below the plasma frequency ωp of the metal. In this low-frequency regime, ω ≪ ωp, the frequency dependence of d⊥ (and d∥) has the universal, asymptotic dependence $$d_{\perp}(\omega)\simeq \zeta +\text{i}\,\frac{\omega}{\omega_{\text{p}}}\,\xi \qquad (\text{for}\ \omega\ll\omega_{\text{p}}),$$ (3) as shown by Persson et al.24,25 by exploiting Kramers–Kronig relations. Here, ζ is the so-called static image-plane position, i.e., the centroid of induced charge under a static, external field26, and ξ defines a phase-space coefficient for low-frequency electron–hole pair creation, whose rate is ∝ qωξ21: both are ground-state quantities. In the jellium approximation of the interacting electron liquid, the constants ζ ≡ ζ(rs) and ξ ≡ ξ(rs) depend solely on the carrier density ne, here parameterized by the Wigner–Seitz radius $$r_{s}a_{\text{B}}\equiv (3n_{\text{e}}/4\pi )^{1/3}$$ (Bohr radius, aB). In the following, we exploit the simple asymptotic relation in Eq. (3) to calculate the dispersion of AGPs with metallic (in addition to graphene's) quantum response included. Quantum corrections in AGPs due to metallic quantum surface-response The spectrum of AGPs calculated classically and with quantum corrections is shown in Fig. 2.
Three models are considered: one, a completely classical, local-response approximation treatment of both the graphene and the metal; and two others, in which graphene’s response is treated by the nonlocal RPA4,9,17,18,19 while the metal’s response is treated either classically or with quantum surface-response included (via the d-parameter). As noted previously, we adopt a jellium approximation for the d-parameter. Figure 2a shows that—for a fixed wavevector—the AGP’s resonance blueshifts upon inclusion of graphene’s quantum response, followed by a redshift due to the quantum surface-response of the metal (since $${\rm{Re}} {\,}{d}_{\perp } > \,0$$ for jellium metals; electronic spill-out)13,15,16,21,27,28. This redshifting due to the metal’s quantum surface-response is opposite to that predicted by the semiclassical hydrodynamic model (HDM) where the result is always a blueshift14 (corresponding to $${\rm{Re}}{\,}{d}_{\perp }^{\text{HDM}} < \,0$$; electronic “spill-in”) due to the neglect of spill-out effects29. The imaginary part of the AGP’s wavevector (that characterizes the mode’s propagation length) is shown in Fig. 2b: the net effect of the inclusion of d is a small, albeit consistent, increase of this imaginary component. Notwithstanding this, the modification of $${\rm{Im}}\, q$$ is not independent of the shift in $${\rm{Re}}{\,}q$$; as a result, an increase in $${\rm{Im}}{\,}q$$ does not necessarily imply the presence of a significant quantum decay channel [e.g., an increase of $${\rm{Im}}{\,}q$$ can simply result from increased classical loss (i.e., arising from local response alone) at the newly shifted $${\rm{Re}}\, q$$ position]. Because of this, we inspect the quality factor $$Q\equiv {\rm{Re}}{\,}q/{\rm{Im}}{\,}q$$ (or “inverse damping ratio”30,31) instead32 (Fig. 2c), which provides a complementary perspective that emphasizes the effective (or normalized) propagation length rather than the absolute length. The incorporation of quantum mechanical effects, first in graphene alone, and then in both graphene and metal, reduces the AGP’s quality factor. Still, the impact of metal-related quantum losses in the latter is negligible, as evidenced by the nearly overlapping black and red curves in Fig. 2c. To better understand these observations, we treat the AGP’s q-shift due to the metal’s quantum surface-response as a perturbation: writing q = q0 + q1, we find that the quantum correction from the metal is q1q0d/(2t), for a jellium adjacent to vacuum in the $${\omega }^{2}/{\omega }_{\text{p}}^{2}\ll {q}_{0}t\ll 1$$ limit (Supplementary Note 3). This simple result, together with Eq. (3), provides a near-quantitative account of the AGP dispersion shifts due to metallic quantum surface-response: for ωωp, (i) $${\rm{Re}}\, {d}_{\perp }$$ tends to a finite value, ζ, which increases (decreases) $${\rm{Re}}\, q$$ for ζ > 0 (ζ < 0); and (ii) $${\rm{Im}}{\,}{d}_{\perp }$$ is $$\mathop{\propto}\omega$$ and therefore asymptotically vanishing as ω/ωp → 0 and so only negligibly increases $${\rm{Im}}\, q$$. Moreover, the preceding perturbative analysis warrants $${\rm{Re}}{\,}{q}_{1}/{\rm{Re}}{\,}{q}_{0} \approx {\rm{Im}}{\,}{q}_{1}/{\rm{Im}}{\,}{q}_{0}$$ (Supplementary Note 3), which elucidates the reason why the AGP’s quality factor remains essentially unaffected by the inclusion of metallic quantum surface-response. Notably, these results explain recent experimental observations that found appreciable spectral shifts but negligible additional broadening due to metallic quantum response10,11. 
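The scaling of the perturbative result quoted above, q1 ≃ q0 d⊥/(2t), is simple enough to tabulate directly; a minimal sketch (the value of Re d⊥ and the spacer thicknesses are illustrative placeholders, not fitted to any experiment):

```python
# Minimal sketch of the perturbative estimate quoted above, q1 ~ q0 * d_perp / (2 t):
# the leading-order relative AGP wavevector shift from the metal's quantum surface-response,
# valid only in the regime omega^2/omega_p^2 << q0*t << 1. All numbers are placeholders.

d_perp_nm = -0.4  # assumed static Re(d_perp) in nm; negative => "spill-in" (blueshift)

for t_nm in (2.0, 5.0, 10.0, 20.0):
    rel_shift = d_perp_nm / (2.0 * t_nm)  # q1 / q0
    print(f"t = {t_nm:5.1f} nm  ->  q1/q0 ~ {rel_shift:+.1%}")
```

The 1/t dependence is the "amplification" discussed in the text: an ångström-scale d⊥ still produces a percent-level wavevector shift once the spacer is only a few nanometers thick.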
Next, by considering the separation between graphene and the metallic interface as a renormalizable parameter, we find a complementary and instructive perspective on the impact of metallic quantum surface-response. Specifically, within the spectral range of interest for AGPs (i.e., ω ≪ ωp), we find that the “bare” graphene–metal separation t is effectively renormalized due to the metal’s quantum surface-response from t to $$\tilde{t}\equiv t-s$$, where s ≈ Re d⊥ ≈ ζ (see Supplementary Note 4), corresponding to a physical picture in which the metal’s interface lies at the centroid of its induced density (i.e., $${\rm{Re}}{\,}{d}_{\perp }$$) rather than at its “classical” jellium edge. With this approach, the form of the dispersion equation is unchanged but references the renormalized separation $$\tilde{t}$$ instead of its bare counterpart t, i.e.,

$$1+\frac{2{\epsilon }_{\text{d}}}{q}\frac{\omega {\epsilon }_{0}}{\text{i}\sigma }=\frac{{\epsilon }_{\text{m}}-{\epsilon }_{\text{d}}}{{\epsilon }_{\text{m}}+{\epsilon }_{\text{d}}}\ {\text{e}}^{-2q\tilde{t}}.$$ (4)

This perspective, for instance, has substantial implications for the analysis and understanding of plasmon rulers33,34,35 at nanometric scales. Furthermore, our findings suggest an interesting experimental opportunity: as all other experimental parameters can be well-characterized by independent means (including the nonlocal conductivity of graphene), high-precision measurements of the AGP’s dispersion can enable the characterization of the low-frequency metallic quantum response—a regime that has otherwise been inaccessible in conventional metal-only plasmonics. The underlying idea is illustrated in Fig. 3; depending on the sign of the static asymptote ζ ≡ d⊥(0), the AGP’s dispersion shifts toward larger q (smaller ω; redshift) for ζ > 0 and toward smaller q (larger ω; blueshift) for ζ < 0. As noted above, the q-shift is ~q0ζ/(2t). Crucially, despite the ångström scale of ζ, this shift can be sizable: the inverse scaling with the spacer thickness t effectively amplifies the attainable shifts in q, reaching up to several μm⁻¹ for few-nanometer t. We stress that these regimes are well within current state-of-the-art experimental capabilities8,10,11, suggesting a new path toward the systematic exploration of the static quantum response of metals.

Probing the quantum surface-response of metals with AGPs

The key parameter that regulates the impact of quantum surface corrections stemming from the metal is the graphene–metal separation, t (analogously to the observations of nonclassical effects in conventional plasmons at narrow metal gaps13,36,37); see Fig. 4. For the experimentally representative parameters indicated in Fig. 4, these corrections come into effect for t ≲ 5 nm, growing rapidly upon decreasing the graphene–metal separation further. Chiefly, ignoring the nonlocal response of the metal leads to a consistent overestimation (underestimation) of the AGP’s wavevector (group velocity) for d⊥ < 0, and vice versa for d⊥ > 0 (Fig. 4a); this behavior is consistent with the effective renormalization of the graphene–metal separation mentioned earlier (Fig. 4b). Finally, we analyze the interplay of both t and EF and their joint influence on the magnitude of the quantum corrections from the metal (we take d⊥ = −4 Å, which is reasonable for the Au substrate used in recent AGP experiments7,8,11); in Fig. 4c we show the relative wavevector quantum shift (for excitation at λ0 = 11.28 μm32).
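Before looking at those numbers, the ~q0ζ/(2t) scaling and the renormalized separation t̃ = t − s already give a feel for the magnitudes involved. The sketch below uses assumed, order-of-magnitude inputs (a mid-IR AGP wavevector and an ångström-scale ζ), not values fitted to the figures; the leading-order formula is only indicative at the smallest separations.

```python
# Order-of-magnitude sketch of the 1/t amplification of the metal-induced q-shift.
# Both q0 and zeta below are assumptions, chosen only to be of plausible magnitude.
q0 = 60e6       # classical AGP wavevector at fixed frequency [1/m] (~60 per micrometre)
zeta = 4e-10    # static d-parameter asymptote, |zeta| ~ 4 angstrom (jellium-like, zeta > 0)

for t_nm in (10, 5, 2, 1):
    t = t_nm * 1e-9
    dq = q0 * zeta / (2 * t)     # leading-order magnitude of the q-shift, ~ q0*zeta/(2t)
    t_eff = t - zeta             # renormalized separation t~ = t - s (s ~ zeta here)
    print(f"t = {t_nm:2d} nm:  |dq| ~ {dq / 1e6:4.1f} um^-1,  "
          f"renormalized separation ~ {t_eff * 1e9:.1f} nm")
```

With these inputs the shift indeed reaches several inverse micrometres once t drops to a few nanometres, which is the amplification argued for above.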
In the few-nanometer regime, the quantum corrections to the AGP wavevector approach 5%, increasing further as t decreases—for instance, in the extreme, one-atom-thick limit (t ≈ 0.7 nm11, which also approximately coincides with the edge of validity of the d-parameter framework, i.e., t ≳ 1 nm15), the AGP’s wavevector can change by as much as 10% for moderate graphene doping. The pronounced Fermi-level dependence exhibited in Fig. 4c also suggests a complementary approach for measuring the metal’s quantum surface-response even if an experimental parameter is unknown (although, as previously noted, all relevant experimental parameters can in fact be characterized using currently available techniques8,10,11,15): such an unknown variable can be fitted at low EF using the “classical” theory (i.e., with d⊥ = d∥ = 0), since the impact of metallic quantum response is negligible in that regime. A parameter-free assessment of the metal’s quantum surface-response can then be carried out by increasing EF (and, with it, the metal-induced quantum shift). We emphasize that this can be accomplished in the same device by doping graphene using standard electrostatic gating8,10,11.

Discussion

In this Article, we have presented a theoretical account that establishes and quantifies the influence of the metal’s quantum response on AGPs in hybrid GDM structures. We have demonstrated that the nanoscale confinement of electromagnetic fields inherent to AGPs can be harnessed to determine the quantum surface-response of metals in the THz and mid-IR spectral ranges (which are typically inaccessible with traditional metal-based plasmonics). Additionally, our findings elucidate and contextualize recent experiments10,11 that have reported the observation of nonclassical spectral shifting of AGPs due to metallic quantum response but without a clear concomitant increase of damping, even for atomically thin graphene–metal separations. Our results also demonstrate that the metal’s quantum surface-response needs to be rigorously accounted for—e.g., using the framework developed here—when searching for signatures of many-body effects in the graphene electron liquid imprinted in the spectrum of AGPs in GDM systems8, since the metal’s quantum surface-response can lead to qualitatively similar dispersion shifts, as shown here. In passing, we emphasize that our framework can be readily generalized to more complex graphene–metal hybrid structures, either by semi-analytical approaches (e.g., the Fourier modal method38 for periodically nanopatterned systems) or by direct implementation in commercially available numerical solvers (see refs. 15,39), simply by adopting d-parameter-corrected boundary conditions15,16. Further, our formalism provides a transparent theoretical foundation for guiding experimental measurements of the quantum surface-response of metals using AGPs. The quantitative knowledge of the metal’s low-frequency, static quantum response is of practical utility in a plethora of scenarios, enabling, for instance, the incorporation of leading-order quantum corrections to the classical electrostatic image theory of particle–surface interaction20 as well as to the van der Waals interaction21,25,40 affecting atoms or molecules near metal surfaces. Another prospect suggested by our findings is the experimental determination of ζ ≡ d⊥(0) through measurements of the AGP’s spectrum.
This highlights a new metric for comparing the fidelity of first-principles calculations of different metals (inasmuch as ab initio methods can yield disparate results depending on the chosen scheme or functional)41,42 with explicit measurements. Our results also highlight that AGPs can be extremely sensitive probes for nanometrology as plasmon rulers, while simultaneously underscoring the importance of incorporating quantum response in the characterization of such rulers at (sub)nanometric scales. Finally, the theory introduced here suggests further directions for exploiting the AGP’s high sensitivity, e.g., to explore the physics governing the complex electron dynamics at the surfaces of superconductors43 and other strongly correlated systems.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8989700675010681, "perplexity": 2296.2155799589577}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00305.warc.gz"}
https://heattransfer.asmedigitalcollection.asme.org/ICONE/proceedings-abstract/ICONE28/85253/1122407
## Abstract

In this study, the system thermal-hydraulic code LOCUST is applied to simulate reflood heat transfer experiments conducted in the RBHT facility, and the effect of spacer grids is considered. The calculation results of LOCUST are compared with experimental data and with the calculations of RELAP5 4.0. The results show that both LOCUST and RELAP5 4.0 are capable of predicting the reflood behaviors at a satisfactory level. In addition, the calculated cladding temperatures and heat transfer coefficients are generally in good agreement with experimental data. When spacer grids are introduced, the PCTs calculated by RELAP5 4.0 and LOCUST are 1178 K and 1201 K, respectively, whereas the calculations without spacer grids are 1184 K and 1206 K, respectively. The comparison reveals that a decrease of around 5 K in the PCT occurs due to spacer grids.
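A quick arithmetic check of the quoted peak cladding temperatures (PCT), using only the numbers stated above:

```python
# PCT values quoted in the abstract, in kelvin
pct_with_grids = {"RELAP5 4.0": 1178.0, "LOCUST": 1201.0}
pct_without_grids = {"RELAP5 4.0": 1184.0, "LOCUST": 1206.0}

for code in pct_with_grids:
    drop = pct_without_grids[code] - pct_with_grids[code]
    print(f"{code}: PCT decrease due to spacer grids = {drop:.0f} K")
# -> 6 K for RELAP5 4.0 and 5 K for LOCUST, i.e. "around 5 K" as stated.
```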
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8735396265983582, "perplexity": 2665.808496674988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00752.warc.gz"}
https://tomatoheart.com/nanowhiz/viral-math-problem-in-japan/
Sometimes things are not as easy as they might seem. Almost all of us (the so-called literate people) know the basics of mathematics well: addition, subtraction, multiplication, and division are the four fundamental pillars of arithmetic. But what if I told you that you may not be able to solve a very simple math problem that involves only three of them? The question is:

According to the YouTube channel “MindYourDecisions”, this problem went viral in Japan after a study found that only 60 percent of twenty-somethings could get the correct answer. In a similar study conducted in the 1980s, the success rate was 90 percent. Let me explain a common mistake and how to get the correct answer by using the order of operations.

The culprit behind this problem? Most probably, the increased use of calculators in our lives. They are on every phone, which makes it easy for students to rely on them to find an answer to a simple math problem like the one above. However, enter this question into a basic calculator (one that evaluates strictly left to right) and you will get the wrong answer.

Is this really simple? In fact, yes—but only if you know the order of operations, a math skill taught in the 3rd grade. If you don’t remember learning it in school, let’s revise it now.

Order of Operations (PEMDAS):
1. Parentheses ( )
2. Exponents x^2
3. Multiplication / Division
4. Addition / Subtraction
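To see the difference in code, here is a small sketch comparing standard operator precedence with naive left-to-right evaluation. The expression used is purely illustrative; it is not the viral problem itself, which appears as an image in the original post.

```python
# Order of operations (PEMDAS) vs. naive left-to-right evaluation.
expr = "1 + 2 * 3 - 4 / 2"

# Python applies standard operator precedence:
correct = eval(expr)          # 1 + 6 - 2 = 5.0

def left_to_right(expression: str) -> float:
    """Evaluate like a basic calculator: apply each operator as it is keyed in."""
    tokens = expression.split()
    result = float(tokens[0])
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = ops[op](result, float(value))
    return result

naive = left_to_right(expr)   # ((((1 + 2) * 3) - 4) / 2) = 2.5

print(correct, naive)         # 5.0 2.5 -- same keystrokes, different answers
```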
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9328406453132629, "perplexity": 341.1747833352998}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607802.75/warc/CC-MAIN-20170524055048-20170524075048-00468.warc.gz"}
https://www.physicsforums.com/threads/two-capacitive-spheres-separated-by-dielectric-ratio-of-radii-for-lowest-electric-fi.621540/
Two capacitive spheres separated by dielectric, ratio of radii for lowest electric field

1. Jul 17, 2012 Xinthose
1. The problem statement, all variables and given/known data
It's desired to build a capacitor which has two concentric spheres separated by a dielectric of high permittivity, low loss, and high dielectric strength. Calculate the ratio of sphere b's radius to sphere a's radius which produces the lowest electric field between the spheres. Not sure how to start this one. Thank you for any help.

2. Jul 18, 2012 CWatters

3. Jul 18, 2012 Xinthose
I do know, from Wikipedia, that concerning concentric spheres, Cap = 4 (pi) ε / ( (1/a) - (1/b) )

4. Jul 18, 2012 Nicholasc1988
So what I've done on this problem so far is C = [4 pi (epsilon nought)][(ab)/(b-a)] and V = (Q/[4 pi (epsilon nought)])(b-a)/(ab), and plugging these into Q = CV I just get Q = Q. Conceptually, from E = kq/r^2, as the radius goes up the E field goes down, so would the ratio of b to a approach infinity, i.e. should a be much less than b?

5. Jul 18, 2012 CWatters
The electric field E = V/d where d is the distance between the plates.

6. Jul 18, 2012 Xinthose
Alright, but you eventually get E = Q / (4 * pi * ε * a * b); so how would you get a ratio of b to a from that?

7. Jul 19, 2012 Xinthose
You failed me Physics Forums; here is the scanned answer from my professor's solution set given to us after the test; I hope that this will help someone else out there. Make of it what you will; his handwriting is kind of hard to read:
http://i633.photobucket.com/albums/uu57/Xinthose/scan0002.jpg
http://i633.photobucket.com/albums/uu57/Xinthose/scan0003.jpg
Last edited by a moderator: May 6, 2017
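Since the professor's solution is only available as scanned images, here is a sketch of one standard reading of the problem: for a fixed voltage V between concentric spheres of radii a < b, the field is largest at the inner sphere, E(a) = V·b/(a(b − a)), and one asks which ratio b/a minimizes that peak field. This interpretation is an assumption, not something stated in the thread.

```python
# Minimize the peak field E(a) = V*b/(a*(b-a)) over a, for fixed b and V.
# (Assumed interpretation of the problem; the scanned solution may differ.)
import sympy as sp

a, b, V = sp.symbols("a b V", positive=True)
E_peak = V * b / (a * (b - a))

crit = sp.solve(sp.diff(E_peak, a), a)
print(crit)                                  # [b/2]  -> optimum at a = b/2, i.e. b/a = 2
print(sp.simplify(E_peak.subs(a, b / 2)))    # minimum peak field: 4*V/b
```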
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330692887306213, "perplexity": 2649.1815280538017}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103579.21/warc/CC-MAIN-20170817151157-20170817171157-00130.warc.gz"}
http://math.stackexchange.com/users/28869/robert-mastragostino?tab=activity&sort=all&page=2
Robert Mastragostino Reputation 9,738 Next privilege 10,000 Rep. Access moderator tools Apr11 awarded Yearling Apr11 answered $A \oplus C = B \oplus C$ but $A\neq B$ Apr6 comment Sketch the graph of the polynomial function $P(x)= x(x-3)(x+2)$ Your intervals have their ends at each of the roots. These are $-2, 0,$ and $3$. So yes, $x=0$ should be included as a boundary and you should test on either side of it to figure out whether the polynomial is positive or negative there. Apr6 comment Calculus on manifolds It's tedious in comparison. You can do it (by 'it' I mean set up an appropriate surface integral, not direct integration in $\mathbb{R}^3$ which doesn't work as Nicholas has pointed out), but you end up doing all the intrinsic calculations anyway. Adding other calculations and things to look out for on top of that doesn't make life easier in the long run. Apr6 answered Sketch the graph of the polynomial function $P(x)= x(x-3)(x+2)$ Mar22 answered Intuitive idea of axiom of choice Mar18 comment Application of Composition of Functions: Real world examples? A function is just a process that turns one thing into another thing. Anytime you're describing something that chains processes together one after the other you're composing functions. You find a probability distribution and then want to find its average. Find a particle's position as a function of time, and then its distance from its start point. Almost any time you want to do multiple things to a function you're composing it with other functions. Mar5 awarded calculus Feb12 comment What are the practical applications of the Taylor Series? @Ruslan you can use the symmetries and periodicity of $\sin(x)$ to restrict your calculation to $[0,\pi/2]$. It's this range that has the maximum error of $8\%$. Feb1 comment Are there any other purposes for variables in math other than functions? @tazheneryduck0 No, that doesn't make any sense. If you divide by $x$ you need to know that any results you get are only okay if $x\neq 0$, because otherwise the division wasn't allowed. But you can definitely divide by variables to find solutions to equations, it's used all the time. Your teacher seems to be confused. If $x$ is a root then $f(x)$ is what ends up being zero. That doesn't prevent you from dividing by $x$ at all. Jan31 answered Are there any other purposes for variables in math other than functions? Jan28 comment Why are particular combinations of algebraic properties “better” (richer and more pervasive) than others? While I don't have a full answer, I would like to point out that associativity is what allows you to move your focus around when solving equations. Commutative but non-associative operations only let you switch two things around, not several in a row, so you barely gain any freedom. Associativity without commutativity is much less restricting. Also function composition is associative, (regardless of formalism; any sensible definition would be) so any algebra that can be interpreted as a collection of transformations (which is a large number of them) has to be associative. Dec8 awarded Nice Answer Dec3 comment Explaining probability theory versus statistics Doesn't this assume that you can't take a Bayesian approach to statistics? Nov30 answered Nullspace that spans $\mathbb{R}^n$? Nov26 comment Proof that imaginary numbers exist? Do you mean "exist" as in "part of the 'real world'"? Or do you mean "how do you prove that you can consistently add square roots of negative numbers in a logically consistent way"? 
Nov18 revised Parametric form of a plane deleted 3 characters in body Nov18 comment Parametric form of a plane @AndyG Yes, it seems it should be. I'll update, thanks Nov13 awarded Nice Question Nov5 comment Is math built on assumptions? I really don't get this student's complaint, or your confusion with it. Why does "assume $x=5$" count as something more important than assumptions in any other hypothetical story? "assume we have a child in a red cape walking to grandma's house" is doing the exact same thing. It's not an assumption about one reality, but one needed to set up the story we want to tell.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6319227814674377, "perplexity": 598.0046605520101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929272.10/warc/CC-MAIN-20150521113209-00217-ip-10-180-206-219.ec2.internal.warc.gz"}
http://ilovephilosophy.com/viewtopic.php?f=2&t=193845
## James S Saint This is the place to shave off that long white beard and stop being philosophical; a forum for members to just talk like normal human beings. ### James S Saint I have been wondering for a while now where James is. encode_decode Philosopher Posts: 1215 Joined: Tue Mar 14, 2017 4:07 pm ### Re: James S Saint Me too. I miss him here. ..... panta rhei ............................................. Mithus Posts: 210 Joined: Sun Mar 02, 2014 10:05 pm ### Re: James S Saint encode_decode wrote: I have been wondering for a while now where James is. :-k Uh oh... Wasn't he pretty ancient? Mitra-Sauwelios religious philosopher Posts: 120 Joined: Fri Oct 13, 2017 5:24 am ### Re: James S Saint What's considered ancient these days? 90+ yrs. old. Return JSS! I AM OFFICIALLY IN HELL! I live my philosophy, it's personal to me and people who engage where I live establish an unspoken dynamic, a relationship of sorts, with me and my philosophy. Cutting folks for sport is a reality for the poor in spirit. I myself only cut the poor in spirit on Tues., Thurs., and every other Sat. WendyDarling Heroine Posts: 7495 Joined: Sat Sep 11, 2010 8:52 am ### Re: James S Saint Mithus wrote:Me too. I miss him here. I also miss Arminius . or maybe its not an odd coincidence that they are both missing at the same time, they appeared to share views in a very supporting manner. Meno_ ILP Legend Posts: 6195 Joined: Tue Dec 08, 2015 2:39 am Location: Mysterium Tremendum ### Re: James S Saint Hope he is doing alright. "I'm sorry, but the lifestyle you've ordered that you've grown accustomed to is completely out of stock. Have a nice day! "-$$“Assuming one can never leave permanent social exile and alienation keep on living only to observe the total collapse of entire societies, nations, or civilizations where afterwards in the inevitable chaos revel in its total destruction taking satisfaction within it as a casual witness. Let it all burn and come crashing down in a festival or spectacle orgy of violence.”-Myself Zero_Sum Evil Neo-Nazi Extraordinaire. Posts: 3232 Joined: Thu Nov 30, 2017 7:05 pm Location: U.S.S.A- Newly lead Bolshevik Soviet block. Also known as Weimar America. ### Re: James S Saint He'll be back before I surpass his post count. That was his crowning achievement here, and he was pretty proud of it. Talking more shit than me is like legendary. You see...a pimp's love is very different from that of a square. Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too. What exactly is logic? -Magnus Anderson Support the innocence project on AmazonSmile instead of Turd's African savior biker dude. http://www.innocenceproject.org/ Mr Reasonable resident contrarian Posts: 25953 Joined: Sat Mar 17, 2007 8:54 am Location: pimping a hole straight through the stratosphere itself ### Re: James S Saint Mr Reasonable wrote:He'll be back before I surpass his post count. That was his crowning achievement here, and he was pretty proud of it. That would probably be a good description of yourself, but it's somehow sad that after all this time you don't have the slightest idea about James. Leyla Posts: 96 Joined: Wed Nov 25, 2015 9:58 pm ### Re: James S Saint Leyla wrote: Mr Reasonable wrote:He'll be back before I surpass his post count. That was his crowning achievement here, and he was pretty proud of it. 
That would probably be a good description of yourself, but it's somehow sad that after all this time you don't have the slightest idea about James. This made me look for James at KTS on the off chance he was there. Don't think he is, but it turns out it's Satyr's fifty-second birthday! Mitra-Sauwelios religious philosopher Posts: 120 Joined: Fri Oct 13, 2017 5:24 am Location: Mad Master ### Re: James S Saint So send him a heart felt ecard. freakin auto correct...I hate you and the programmer who thought you up. I AM OFFICIALLY IN HELL! I live my philosophy, it's personal to me and people who engage where I live establish an unspoken dynamic, a relationship of sorts, with me and my philosophy. Cutting folks for sport is a reality for the poor in spirit. I myself only cut the poor in spirit on Tues., Thurs., and every other Sat. WendyDarling Heroine Posts: 7495 Joined: Sat Sep 11, 2010 8:52 am Location: Hades ### Re: James S Saint Leyla wrote: Mr Reasonable wrote:He'll be back before I surpass his post count. That was his crowning achievement here, and he was pretty proud of it. That would probably be a good description of yourself, but it's somehow sad that after all this time you don't have the slightest idea about James. Shut up bitch. You see...a pimp's love is very different from that of a square. Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too. What exactly is logic? -Magnus Anderson Support the innocence project on AmazonSmile instead of Turd's African savior biker dude. http://www.innocenceproject.org/ Mr Reasonable resident contrarian Posts: 25953 Joined: Sat Mar 17, 2007 8:54 am Location: pimping a hole straight through the stratosphere itself ### Re: James S Saint More to the point, how does all of this factor into, among other things... The Real God ≡ The reason/cause for the Universe being what it is = "The situation cannot be what it is and also remain as it is". Or maybe it was something that I said. Another objectivist bites the dust? He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles Start here: viewtopic.php?f=1&t=176529 Then here: viewtopic.php?f=15&t=185296 And here: viewtopic.php?f=1&t=194382 iambiguous ILP Legend Posts: 34934 Joined: Tue Nov 16, 2010 8:03 pm Location: baltimore maryland ### Re: James S Saint I know that James was an older man [70's?] and we're all wondering if he has died passing on from this world. That's at least what me and Wendy are thinking anyways. If he has passed on from this earthly realm I hope he found peace in what this world could not offer him. We never agreed much on anything philosophically but he was an interesting man that was strong in his convictions where I respected that. RIP James S Saint. "Death is the last greatest adventure for us all." We younger generations will carry the torch that you dropped fighting against the New World Order James. "I'm sorry, but the lifestyle you've ordered that you've grown accustomed to is completely out of stock. Have a nice day! "-$$$“Assuming one can never leave permanent social exile and alienation keep on living only to observe the total collapse of entire societies, nations, or civilizations where afterwards in the inevitable chaos revel in its total destruction taking satisfaction within it as a casual witness. 
Let it all burn and come crashing down in a festival or spectacle orgy of violence.”-Myself Zero_Sum Evil Neo-Nazi Extraordinaire. Posts: 3232 Joined: Thu Nov 30, 2017 7:05 pm Location: U.S.S.A- Newly lead Bolshevik Soviet block. Also known as Weimar America. ### Re: James S Saint I sincerely hope that he merely takes a break from ILP because he has something better to do than posting here. ..... panta rhei ............................................. Mithus Posts: 210 Joined: Sun Mar 02, 2014 10:05 pm ### Re: James S Saint Mithus wrote:I sincerely hope that he merely takes a break from ILP because he has something better to do than posting here. That's what I thought also originally but we're talking about a man that before his disappearance was on here more than any other member concerning hours of interaction. I'm thinking it is for the worse. "I'm sorry, but the lifestyle you've ordered that you've grown accustomed to is completely out of stock. Have a nice day! "-$$“Assuming one can never leave permanent social exile and alienation keep on living only to observe the total collapse of entire societies, nations, or civilizations where afterwards in the inevitable chaos revel in its total destruction taking satisfaction within it as a casual witness. Let it all burn and come crashing down in a festival or spectacle orgy of violence.”-Myself Zero_Sum Evil Neo-Nazi Extraordinaire. Posts: 3232 Joined: Thu Nov 30, 2017 7:05 pm Location: U.S.S.A- Newly lead Bolshevik Soviet block. Also known as Weimar America. ### Re: James S Saint Is he really 70 years old? You see...a pimp's love is very different from that of a square. Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too. What exactly is logic? -Magnus Anderson Support the innocence project on AmazonSmile instead of Turd's African savior biker dude. http://www.innocenceproject.org/ Mr Reasonable resident contrarian Posts: 25953 Joined: Sat Mar 17, 2007 8:54 am Location: pimping a hole straight through the stratosphere itself ### Re: James S Saint Mithus wrote:I sincerely hope that he merely takes a break from ILP because he has something better to do than posting here. From what I can tell from searching his posts, he hasn't taken a break since May 2011. Serendipper Philosopher Posts: 2180 Joined: Sun Aug 13, 2017 7:30 pm ### Re: James S Saint Mr Reasonable wrote:Is he really 70 years old? Same question. Magnus Anderson Philosopher Posts: 4114 Joined: Mon Mar 17, 2014 7:26 pm ### Re: James S Saint Magnus Anderson wrote: Mr Reasonable wrote:Is he really 70 years old? Same question. Probably. I thought he was in his late 60's but he could have made it to 70 by now. It's not looking good anyway. If he was taken to hospital around the time of his last post (Jan 6th) I'd expect him to have been released by now... unless he's in a coma... or worse... I'm also concerned about KrisWest. Her husband was very ill a little while back and Kris freaked out at the prospect of losing him. I've seen her log in recently but then she logged out without saying anything. I hope she's OK... Chakra Superstar Philosopher Posts: 1038 Joined: Sat Apr 07, 2012 10:42 am ### Re: James S Saint Chakra Superstar wrote:It's not looking good anyway. If he was taken to hospital around the time of his last post (Jan 6th) I'd expect him to have been released by now... unless he's in a coma... or worse... Or stroke. I doubt he had much family since he spent all his time on here. 
How would he even get to the hospital after a cardiac event? It's a sad situation to contemplate and leaves me with some regrets. Serendipper Philosopher Posts: 2180 Joined: Sun Aug 13, 2017 7:30 pm ### Re: James S Saint I hope he shows up and has a good laugh at the claim that he is in his 70s and dead. I had the impression that he is somewhere between 30 and 50 and nowhere near close to death. Any evidence to back up your claims? Magnus Anderson Philosopher Posts: 4114 Joined: Mon Mar 17, 2014 7:26 pm ### Re: James S Saint Mr Reasonable wrote:Is he really 70 years old? That's what I've heard from others here. "I'm sorry, but the lifestyle you've ordered that you've grown accustomed to is completely out of stock. Have a nice day! "-$$$ “Assuming one can never leave permanent social exile and alienation keep on living only to observe the total collapse of entire societies, nations, or civilizations where afterwards in the inevitable chaos revel in its total destruction taking satisfaction within it as a casual witness. Let it all burn and come crashing down in a festival or spectacle orgy of violence.”-Myself Zero_Sum Evil Neo-Nazi Extraordinaire. Posts: 3232 Joined: Thu Nov 30, 2017 7:05 pm Location: U.S.S.A- Newly lead Bolshevik Soviet block. Also known as Weimar America. ### Re: James S Saint Magnus Anderson wrote:I hope he shows up and has a good laugh at the claim that he is in his 70s and dead. I had the impression that he is somewhere between 30 and 50 and nowhere near close to death. Any evidence to back up your claims? If I'm wrong I'm wrong but if not.... "I'm sorry, but the lifestyle you've ordered that you've grown accustomed to is completely out of stock. Have a nice day! "-\$ “Assuming one can never leave permanent social exile and alienation keep on living only to observe the total collapse of entire societies, nations, or civilizations where afterwards in the inevitable chaos revel in its total destruction taking satisfaction within it as a casual witness. Let it all burn and come crashing down in a festival or spectacle orgy of violence.”-Myself Zero_Sum Evil Neo-Nazi Extraordinaire. Posts: 3232 Joined: Thu Nov 30, 2017 7:05 pm Location: U.S.S.A- Newly lead Bolshevik Soviet block. Also known as Weimar America. ### Re: James S Saint Zero_Sum wrote: Mr Reasonable wrote:Is he really 70 years old? That's what I've heard from others here. Maybe Fixed knows. Think they met once. Magnus Anderson Philosopher Posts: 4114 Joined: Mon Mar 17, 2014 7:26 pm ### Re: James S Saint Fixed hasn't been here for hours. Chakra Superstar Philosopher Posts: 1038 Joined: Sat Apr 07, 2012 10:42 am Next
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5062369704246521, "perplexity": 8813.4875204334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510846.12/warc/CC-MAIN-20200403092656-20200403122656-00366.warc.gz"}
http://mathhelpforum.com/advanced-algebra/121213-proof-involving-inequalities.html
1. ## Proof involving inequalities Let $a,b \in \mathbb{R}$. If $a \le b_{1}$ for every $b_{1} > b$, then $a \le b$. I'm fairly certain I have to apply some of the ordered field properties but I am not sure how. Thanks! 2. Suppose $a>b$; let $a-b=\epsilon$. Let $b_1=b+\epsilon/2$. Then $b_1 >b$ so by assumption $a \leq b_1$. So $a \leq b+\epsilon/2 =b+\frac{a-b}{2} = \frac{a+b}{2} < a$, which is absurd. 3. So the contradiction you arrived at is $a > a$, if I'm not mistaken. Thanks! 4. That is correct! There may be a more abstract approach but this is pretty straightforward. 5. Originally Posted by Pinkk Let $a,b \in \mathbb{R}$. If $a \le b_{1}$ for every $b_{1} > b$, then $a \le b$. I'm fairly certain I have to apply some of the ordered field properties but I am not sure how. Thanks! http://www.mathhelpforum.com/math-he...her-proof.html. Look familiar? Clearly, if it is true for every $b_1>b$ then it is true for the above post.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9952408671379089, "perplexity": 144.0928349275479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00194-ip-10-171-10-70.ec2.internal.warc.gz"}
https://web2.0calc.com/questions/help_80532
# Help!

The fourth degree polynomial equation $$x^4 - 7x^3 + 4x^2 + 7x - 4 = 0$$ has four real roots, a, b, c, and d. What is the value of the sum $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}$$? Express your answer as a common fraction.
Jun 4, 2019

#1
By Vieta's formulas for a monic quartic,
$$a b c d = c_0 = -4\\ bcd+acd+abd+abc = -c_1 = -7\\ \dfrac 1 a + \dfrac 1 b+\dfrac 1 c+\dfrac 1 d = \dfrac{bcd+acd+abd+abc}{a b c d } = \dfrac{-7}{-4}=\dfrac 7 4$$
Jun 4, 2019
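As a sanity check, here is a quick numerical verification of the 7/4 result (a minimal sketch assuming NumPy is available):

```python
import numpy as np

# Roots of x^4 - 7x^3 + 4x^2 + 7x - 4 = 0 (coefficients in descending order)
roots = np.roots([1, -7, 4, 7, -4])

# Sum of reciprocals; .real discards any tiny numerical imaginary residue
print(np.sum(1.0 / roots).real)   # -> 1.75, i.e. 7/4, matching the answer above
```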
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8574908971786499, "perplexity": 786.7140601906285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886802.13/warc/CC-MAIN-20200704232817-20200705022817-00559.warc.gz"}
http://www.scottaaronson.com/blog/?p=613
## Better late than never

No, I’m not talking about Osama, but about my reactions below to a New Yorker article about quantum computing—reactions whose writing was rudely interrupted by last night’s news.  Of all the possible times in the past decade to get him, they had to pick one that would overshadow an important Shtetl-Optimized discussion about complexity theory, the Many-Worlds Interpretation, and the popularization of science?  Well, I guess I’ll let it slide.

As already discussed on Peter Woit’s blog, this week’s New Yorker has a long piece about quantum computing by the novelist Rivka Galchen (unfortunately the article is behind a paywall).  Most of the article is about the quantum computing pioneer David Deutsch: his genius, his eccentricity, his certainty that parallel universes exist, his insistence on rational explanations for everything, his disdain for “intellectual obfuscators” (of whom Niels Bohr is a favorite example), his indifference to most of the problems that occupy other quantum computing researchers, the messiness of his house, his reluctance to leave his house, and his love of the TV show House.

Having spent a wonderful, mind-expanding day with Deutsch in 2002—at his house in Oxford, of course—I can personally vouch for all of the above (except the part about House, which hadn’t yet debuted then).  On the one hand, Deutsch is one of the most brilliant conversationalists I’ve ever encountered; on the other hand, I was astonished to find myself, as a second-year graduate student, explaining to the father of quantum computing what BQP was.  So basically, David Deutsch is someone who merits a New Yorker profile if anyone does.  And I was pleased to see Galchen skillfully leveraging Deutsch’s highly-profilable personality to expose a lay audience (well, OK, a chardonnay-sipping Manhattan socialite audience) to some of the great questions of science and philosophy.

However, reading this article also depressed me, as it dawned on me that the entire thing could have been written fifteen years ago, with only minor changes to the parts about experiment and zero change to the theoretical parts.  I thought: “has there really been that little progress in quantum computing theory the past decade and a half—at least progress that a New Yorker reader would care about?”  Even the sociological observations are dated: Galchen writes about interest in quantum computing as the “Oxford flu,” rather than the “Waterloo flu” or “Caltech flu” that it’s been since 2000 or so (the latter two capitals of the field aren’t even mentioned!).  A good analogy would be an article about the Web, published today, that described the strange and exciting new world of Netscape, HotBot, and AltaVista.

A more serious issue is that the article falls victim to almost every misleading pop-science trope about quantum computing that some of us have been trying to correct for the past decade.  For example:

With one millionth of the hardware of an ordinary laptop, a quantum computer could store as many bits of information as there are particles in the universe.

Noooooo!  That’s only for an extremely strange definition of “store”…

Oxford’s eight-qubit quantum computer has significantly less computational power than an abacus, but fifty to a hundred qubits could make something as powerful as any laptop.

Noooooo!
Fifty to a hundred qubits could maybe replace your laptop, if the only thing you wanted to use your laptop for was simulating a system of fifty to a hundred qubits… In a 1985 paper, Deutsch pointed out that, because Turing was working with classical physics, his universal computer could imitate only a subset of possible computers.  Turing’s theory needed to account for quantum mechanics if its logic was to hold.  Deutsch proposed a universal quantum computer based on quantum physics, which would have calculating powers that Turing’s computer (even in theory) could not simulate. There are at least three problems here.  The first is conflating simulation with efficient simulation.  At the risk of going hoarse, a classical Turing machine can calculate absolutely everything that a quantum computer can calculate! It might “merely” need exponentially more time.  Second, no one has proved that a classical Turing machine really does need exponentially more time, i.e., that it can’t efficiently simulate a quantum computer.  That remains a (deep, plausible, and widely-believed) conjecture, which will take enormous mathematical advances to resolve.  And third, Deutsch’s landmark paper wasn’t among the ones to give evidence for that conjecture.  The first such evidence only came later, with the work of Bernstein-Vazirani, Simon, and Shor. To be fair to Galchen, Deutsch himself has often been inaccurate on these points, even though he ought to (and does!) know better.  Specifically, he conflates the original Church-Turing Thesis (which isn’t challenged in the slightest by quantum computing) with its modern, polynomial-time version (which is), and he neglects to mention the conjectural status of quantum computers’ speedup.  Here are two examples out of many, from The Fabric of Reality: “quantum computers can perform computations of which no (human) mathematician will ever, even in principle, be capable.” “if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number.” Am I just harping over technicalities here?  In my view, the issue goes deeper.  All of the above oversights can be understood as symptoms of complexophobia: the fear of acknowledging that one is actually making statements about computational complexity theory.  Again and again, I’ve seen science writers go through strange verbal contortions to avoid the question of how anyone could know that a computation inherently requires a huge amount of time—as if the reader must be prevented, at all costs, from seeing such a claim as anything other than obvious.  It can be fascinating to watch, in the same way it’s fascinating to watch a politician discuss (say) Confederate History Month without mentioning slavery.  How long can you poke and prod the P versus NP beast without rousting it? On the other hand, complexity theory does show up in Galchen’s article, and in an extremely interesting context: that of explaining where Deutsch got the idea for quantum computing. According to Deutsch, the insight for [his famous 1985 paper] came from a conversation in the early eighties with the physicist Charles Bennett, of I.B.M., about computational-complexity theory, at the time a sexy new field that investigated the difficulty of a computational task. Is “at the time” meant to imply complexity theory is no longer sexy, or merely that it’s no longer new?  
Leaving that aside… Mass, for instance, is a fundamental property, because it remains the same in any setting; weight is a relative property, because an object’s weight depends on the strength of gravity acting on it … If computational complexity was like mass—if it was a relative property—then complexity was quite profound; if not, then not. “I was just sounding off,” Deutsch said.  “I said they make too much of this”—meaning complexity theory—“because there’s no standard computer with respect to which you should be calculating the complexity of the task.”  Just as an object’s weight depends on the force of gravity in which it’s measured, the degree of computational complexity depended on the computer on which it was measured.  One could find out how complex a task was to perform on a particular computer, but that didn’t say how complex a task was fundamentally, in reference to the universe … Complexity theorists, Deutsch reasoned, were wasting their time. The tale continues with Bennett pointing out that the universe itself could be taken to be the “fundamental computer,” which leads Deutsch to the shocking realization that the complexity theorists weren’t complete morons. Sure, they had a silly theory where all the answers depended on which computer you chose (which somehow none of them ever noticed), but luckily, it could be fixed by the simple addition of quantum mechanics! Over the anguished howls of my classical complexity-theorist friends, I should point out that this story isn’t completely false.  There’s no denying that merging quantum mechanics with theoretical computer science was a major advance in human knowledge, and that the people who first had the idea to merge the two were not computer scientists, but physicists like Deutsch and Feynman (the latter’s role is completely left out of Galchen’s story). But complexity theory wasn’t so much a flawed early attempt at quantum computing as an essential prerequisite to it: the thing that made it possible to articulate how quantum computers might differ from classical computers in the first place.  Indeed, it occurs to me that Deutsch and Bennett’s conversation provides the key to resolving a puzzle discussed in the article: “Quantum computers should have been invented in the nineteen-thirties,” [Deutsch] observed near the end of our conversation.  “The stuff that I did in the late nineteen-seventies and early nineteen-eighties didn’t use any innovation that hadn’t been known in the thirties.”  That is straightforwardly true.  Deutsch went on, “The question is why.” I used to go around saying the same thing: “someone like John von Neumann could have easily invented quantum computing in the 1930s, had he just put the pieces together!”  But I now suspect this view is a mistake, the result of projecting what’s obvious today onto a much earlier era.  For there’s at least one essential ingredient for quantum computing that wouldn’t enter scientific consciousness until the 1970s or so: complexity theory, and particularly the distinction between polynomial and exponential time. Over the years, I’ve developed what I call the Minus-Sign Test, a reliable way to rate popularizations of quantum mechanics.  
To pass the Minus-Sign Test, all a popularization needs to do is mention the minus signs: i.e., interference between positive and negative amplitudes, the defining feature of quantum mechanics, the thing that makes it different from classical probability theory, the reason why we can’t say Schrödinger’s cat is “really either dead or alive,” and we simply don’t know which one, the reason why the entangled particles can’t have just agreed in advance that one would spin up and the other would spin down.  Another name for the Minus-Sign Test is the High-School Student Test, since it’s the thing that determines whether a bright high-school student, meeting quantum mechanics for the first time through the popularization, would come away thinking of superposition as (a) one of the coolest discoveries about Nature ever made, or (b) a synonym used by some famous authority figures for ignorance. Despite the low bar set by the Minus-Sign Test, I’m afraid almost every popular article about quantum mechanics ever written has failed it, the present piece included. Reading Not Even Wrong, I was surprised at first that the discussion centered around Deutsch’s argument that quantum computing proves the existence of Many Worlds.  (More precisely, Deutsch’s position is that Many Worlds is an established fact with or without quantum computing, but that for those who are too dense or stubborn to see it, a working quantum computer will be useful for hitting them over the head.) As others pointed out: yes, the state of the universe as described by quantum mechanics is a vastly, exponentially bigger thing than anything dreamt of in classical physics; and a scalable quantum computer would be dramatic evidence that this exponentiality is really “out there,” that it’s not just an artifact of our best current theory.  These are not merely truths, but truths worth shouting from the rooftops. However, there’s then the further question of whether it’s useful to talk about one quantum-mechanical universe as an exponential number of parallel semi-classical universes.  After all, to whatever extent the branches of a superposition successfully contribute to a quantum computation, to that extent they’re not so much “parallel universes” as one giant, fault-tolerantly-encoded, self-interfering blob; and to whatever extent those branches do look like parallel universes, to that extent they’re now forever out of causal contact with each other—the branches other than our own figuring into our explanations for observable events only in the way that classical counterfactuals figure in. Anyway, I thought: does anyone still care about these issues?  Wasn’t every possible argument and counterargument explored to death years ago? But this reaction just reveals my personal bias.  Sometime in graduate school, I realized that I was less interested in winning philosophical debates than in discovering new phenomena for philosophers to debate about.  Why brood over the true meaning of (say) Gödel’s Theorem or the Bell Inequality, when there are probably other such worldview-changing results still to be found, and those results might render the brooding irrelevant anyway?  Because of this attitude, I confess to being less interested in whether Many-Worlds is true than in whether it’s scientifically fruitful.  As Peter Shor once memorably put it on this blog: why not be a Many-Worlder on Monday, a Bohmian on Tuesday, and a Copenhagenist on Wednesday, if that’s what helps you prove new theorems? 
Ironically, this attitude seems to me to mesh well with Deutsch’s own emphasis on explanation as the goal of science.  Ask not whether the parallel universes are “really there,” or whether they should really be called “parallel universes”—ask what explanatory work they do for you!  (That is, over and above the explanatory work that QM itself already does for you, assuming you accept it and know how to use it.) So for me, the single strongest argument in favor of Many-Worlds is what I call the “Deutsch argument”: Many-Worlds is scientifically fruitful, because it led David Deutsch to think of quantum computing. This argument carries considerable force for me.  On the other hand, if we accept it, then it seems we should also accept the following argument: Bohmian mechanics is scientifically fruitful, because it led John Bell to think of the Bell inequality. Furthermore, consider the following facts: David Deutsch is a brilliant, iconoclastic theoretical physicist, who thought deeply about quantum foundations at a time when it was unfashionable to do so.  His extraordinary (and not wholly-unjustified!) self-confidence in his own powers of reasoning has led to his defending not one but many heterodox ideas. Is it possible that these facts provide a common explanation for Deutsch’s certainty about Many-Worlds and his pioneering role in quantum computing, without our needing to invoke the former to explain the latter? Let me end with a few miscellaneous reactions to Galchen’s article. Physics advances by accepting absurdities.  Its history is one of unbelievable ideas proving to be true. I’d prefer to say the history of physics is one of a vast number of unbelievable ideas proving to be false, and a few specific unbelievable ideas proving to be true—especially ideas having to do with the use of negative numbers where one might have thought only positive numbers made sense. [Robert Schoelkopf and his group at Yale] have configured their computer to run what is known as a Grover’s algorithm, one that deals with a four-card-monte type of question: Which hidden card is the queen?  It’s a sort of Shor’s algorithm for beginners, something that a small quantum computer can take on. No, small quantum computers can and have taken on both Shor’s and Grover’s algorithms, solving tiny instances in each case.  The real difference between Shor’s and Grover’s algorithms is one that complexophobia might prevent Galchen from mentioning: Shor gives you a (conjectured) exponential speedup for some highly-specific problems (factoring and discrete log), while Grover gives you “merely” a quadratic speedup, but for a much wider class of problems. “Look,” [Deutsch] went on, “I can’t stop you from writing an article about a weird English guy who thinks there are parallel universes.  But I think that style of thinking is kind of a put-down to the reader.  It’s almost like saying, If you’re not weird in these ways, you’ve got no hope as a creative thinker.  That’s not true.  The weirdness is only superficial.” This was my favorite passage in the article. ### 113 Responses to “Better late than never” 1. Carl Says: I like that I’m a grad student in Comparative Philosophy (that is to say, I live in the land of woo-woo), and I could identify what was wrong with the blockquoted portions of the article before I read your commentary. 🙂 I guess that this blog is doing something useful. 2. Scott Says: Thanks, Carl! In the future, maybe I should just quote the troubling passages and leave the commentary on them to the commenters? 
🙂 3. jb Says: In Yuri Manin’s collection of essays “Mathematics as Metaphor”, he mentions somewhere that he was talking about the possibility of quantum computers around the time that Feynman first did. He might have written a paper in Russian about it back then. I can’t remember, but it’s probably not reading too much into things to say that he thinks he deserves some credit. You might want to look into that. It wouldn’t be surprising. 4. Scott Says: jb: Yes, the Nielsen & Chuang textbook (for example) talks about Manin, and several other people who had QC ideas around the same time as Deutsch and Feynman or even earlier. I can think of few better illustrations for the old saying that what counts isn’t being the first person to originate an idea, but the last one! 🙂 And, as I’ve seen again and again in my own career, that usually requires not just (e.g.) mentioning the idea in a paper about something else, published in some venue that none of the relevant people read, but really hammering the idea home (or being lucky enough that other people do that for you, while giving you credit). It’s not enough to wander the land; you’ve got to settle and build a city on it. 5. Mitchell Porter Says: The real question is: Why wasn’t the distinction between polynomial time and exponential time discovered before the 1970s? 6. Scott Says: Mitchell: Good question! The short answer is that it was, many times, but without a whole field of algorithm design (motivated by actually-existing computers), people didn’t grasp its importance. (Gödel certainly understood the distinction in his 1956 letter to von Neumann, but he never published about it. In the 1960s, Edmonds, Hartmanis-Stearns, and Cobham did publish their seminal papers explaining, defending, and applying the distinction. But at least as I understand it, it wasn’t until the discovery of NP-completeness in the 1970s that the distinction started filtering into the wider scientific world.) 7. Mike Says: An excellent post Scott. Insightful, balanced and very informative — no hint at all of the grumpiness sometimes found elsewhere when these topics are discussed. 8. Yatima Says: So….a Letter to the Editor? Best case scenario, Science will have been popularized in an intelligent manner. Worst case scenario, you might bore the New Yorker readership out of their Armanis. 9. Scott Says: So….a Letter to the Editor? Eh, I already said what I wanted to here. And there’s too much to fit into a letter. 10. Moshe Says: One of the things that bothers me about Deutsch’s way of thinking is the following. One of the superficial differences between classical and quantum mechanics is that the easiest, most accessible formalism of either one of them is different. Classical mechanics can be (but doesn’t have to be) formulated directly in terms of observable quantities obeying certain differential equations. In quantum mechanics you are forced into using redundant variables encoded (for example) in the wavefunction or density matrix. You then have a prescription of extracting from all this redundant information the very small subset of it corresponding to the physical information. It seems to me that it is precisely this redundancy that is so celebrated by Deutsch as the force of quantum mechanics. But it is not – to compare meaningfully any aspect of quantum and classical mechanics (e.g. computational power) you’d have to use similar formalisms to distinguish language from content. 
I suspect then that you can formulate (clumsily and redundantly) purely classical mechanics in terms of redundant variables such as the wavefunction, and then you’d have yourself exponential speedup a la Deutsch. In other words, I don’t see any aspect of actual QM as a dynamical theory that comes into that type of reasoning. Or, maybe I am missing something… 11. Kevin C Says: Could you post a link to further information on the “Minus-Sign Test” and/or popularizations that pass it? I’ve read quite a lot about quantum mechanics from a lay person level, yet I’m not sure I understand what you are talking about with this test, so I am afraid that perhaps it is a portion of QM that I do not understand. 12. T. Says: That’s a great post, thanks Scott. It’s rather astounding to read that Deutsch has been conflating hypotheses and claiming things as facts that really are not. Just by itself, the statement that quantum computers could solve problems that are beyond the scope of any classical computer has been so widely publicized that it deserves massive debunking. Maybe this needs to be promoted as another subtitle below “Shtetl-Optimized”? 13. Joshua Zelinsky Says: Ironically, this attitude seems to me to mesh well with Deutsch’s own emphasis on explanation as the goal of science. Ask not whether the parallel universes are “really there,” or whether they should really be called “parallel universes”—ask what explanatory work they do for you! (That is, over and above the explanatory work that QM itself already does for you, assuming you accept it and know how to use it.) So for me, the single strongest argument in favor of Many-Worlds is what I call the “Deutsch argument”: Many-Worlds is scientifically fruitful, because it led David Deutsch to think of quantum computing. The point about whether an idea is fruitful seems distinct from Deutsch’s notion of explanations being important. Fruitfulness is generally what was emphasized by Lakatos. 14. Scott Says: Moshe #10: I suspect then that you can formulate (clumsily and redundantly) purely classical mechanics in terms of redundant variables such as the wavefunction, and then you’d have yourself exponential speedup a la Deutsch. That’s where you’re mistaken, unless I’m misunderstanding you. The (conjectured) exponential speedup of Shor’s algorithm is not some artifact of the way quantum states are represented—if QM is correct, then it’s a real, actual phenomenon. On the other hand, I agree that thinking about the wavefunction “realistically” (as an exponentially-large classical object) seems to be a mistake that countless popular writers make, which then leads them to believe that quantum computers can solve black-box search problems instantaneously, store exponentially-many classical bits, and do other things that they’re known not to be able to do. 15. Scott Says: Kevin C #11: Could you post a link to further information on the “Minus-Sign Test” and/or popularizations that pass it? I’ve read quite a lot about quantum mechanics from a lay person level, yet I’m not sure I understand what you are talking about with this test, so I am afraid that perhaps it is a portion of QM that I do not understand. I’m sorry to break the news that whatever popularizations you read failed you: the minus signs aren’t just a “portion” of QM, they’re the entire thing that distinguishes QM from classical probability theory! 
Here are some resources to get you started:
- The wonderful book From Eternity to Here by Sean Carroll (which contains one of the only popular-level discussions of QM I know about to pass the Minus-Sign Test with flying colors)
- Pretty much anything by Feynman (e.g., “QED” or “The Character of Physical Law”)
- The Square Root of NOT by Brian Hayes
Let me know if you read those and want more. 🙂 16. Moshe Says: Scott, that is my point – the exponential speedup that actually does exist seems to me unrelated to the point of the formalism that motivates Deutsch. Rather, it uses facts about quantum mechanics that are inherently quantum, and not just its representation as an exponentially large amount of unobservable information. Such a representation can exist for any theory; in fact it ought to exist for classical mechanics as well. What facts are those is a fascinating question, but I think the wavefunction having many branches is a red herring. 17. Scott Says: Moshe: The way I’d put it is (1) the wavefunction has exponentially many branches—so far, no distinction at all between QM and classical probability theory, but then (2) the amplitudes can be negative—and that, combined with (1), is what makes exponential quantum speedups possible. 18. Scott Says: T. #12: Just by itself, the statement that quantum computers could solve problems that are beyond the scope of any classical computer has been so widely publicized that it deserves massive debunking. Maybe this needs to be promoted as another subtitle below “Shtetl-Optimized”? Done! 🙂 Thanks for the suggestion. 19. Mike Says: “and can be simulated classically with exponential slowdown” 😉 Perfect. Exactly the point! 20. Scott Says: Joshua #13: The point about whether an idea is fruitful seems distinct from Deutsch’s notion of explanations being important. Fruitfulness is generally what was emphasized by Lakatos. For me, whether an idea is fruitful is a test of whether it has explanatory power. I have a hard time thinking of any example of a satisfying explanation that doesn’t open up further avenues for research. 21. Moshe Says: Scott, we both seem to agree that (1) is not enough, which is my main objection to (at least vulgarizations of) Deutsch’s intuition of “parallel” computations. I guess this is what you mean by the minus sign test… But it is not clear to me that (2) is sufficient, or the essential point. The fact is that only specific problems, whose structure is used in specific ways, enjoy the speedup. So, any “generic” explanation will have to give a reason why this doesn’t work for generic problems. 22. Moshe Says: Or to cast it in a language you may sympathize with – I’d be suspicious of any explanation of facts about the complexity of quantum computation that does not use actual complexity theory in an essential way. 23. Scott Says: Moshe: I certainly sympathize! But I think it’s useful to distinguish (1) explaining a computational model itself, and (2) explaining how interesting or nontrivial algorithms work in that model, or the impossibility results that can be proved about the model. I take it for granted that (2) is inherently complicated—so in particular, I don’t expect popular articles about quantum computing to be able to do it! I’d happily settle for popular articles doing (1) (which of course, most of them already fail at).
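In fact, to illustrate what I mean by (1), and the minus-sign point from #17, here is a toy sketch in a few lines of throwaway Python (the function name and all the details are mine, purely for illustration). The “model itself” is just: a state of n qubits is a vector of 2^n amplitudes, gates are unitary matrices, and measurement probabilities are squared amplitudes. The minus sign in the Hadamard matrix below is what lets branches cancel, and since the sketch stores all 2^n amplitudes explicitly, it also illustrates the new tagline, “can be simulated classically with exponential slowdown”:

```python
# Toy sketch only: the "model itself," written out at exponential classical cost.
import numpy as np

n = 2
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # the minus sign lives here

def apply_one_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the target qubit (qubit 0 = leftmost tensor factor)."""
    op = np.eye(1)
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

state = apply_one_qubit_gate(state, H, 0, n)    # branch on qubit 0
state = apply_one_qubit_gate(state, H, 1, n)    # branch on qubit 1
state = apply_one_qubit_gate(state, H, 0, n)    # recombine: two branches cancel exactly

print(np.round(np.abs(state) ** 2, 3))          # [0.5 0.5 0.  0. ]
```

With classical probabilities in place of amplitudes, flipping a fair coin twice leaves you uniformly random; with amplitudes, applying H twice brings the branches back together, because the positive and negative contributions cancel. That cancellation, plus the fact that the vector has 2^n entries rather than n, is the whole game.
24.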
Moshe Says: Yeah, I agree, but my comment was not so much about the article as about the question being discussed in that article: what facts about quantum mechanics are the essence of the difference you get in computational power (a vague question if there ever was one). The explanation given there (and the modification in your comment, to a lesser degree) seems to me misleading because it applies universally: it suggests speedup is a generic feature following solely from using the quantum mechanical formalism. The real answer is probably more intricate; if you have a reference for an interested outsider with something more convincing I’d be interested to read about it. 25. Mike Says: Moshe, I don’t think that “speedup is a generic feature following solely from using the quantum mechanical formalism.” I think the key word is “solely”. On a prior post: http://www.scottaaronson.com/blog/?p=208 Scott said in part: “. . . if we want a fast quantum factoring algorithm, we’re going to have to exploit some structure in the factoring problem: in other words, some mathematical property of factoring that it doesn’t share with just a generic problem of finding a needle in a haystack.” Not sure if this is what your question was driving at. 26. Moshe Says: Yeah, I’m committing the ancient sin of preaching to the choir. Thanks for reminding me of that post, this is probably what I need. 27. Mike Says: Moshe, I’m really out of my depth here, but perhaps in addition to the potential advantages inherent in quantum mechanics, each type of problem (factoring, simulation, etc.) must have its own peculiar exploitable feature in order for the (plausibly conjectured) speedup to occur. Or, maybe not. Perhaps someone else has a pithy answer to this. 28. Dave Bacon Says: “And third, Deutsch’s landmark paper wasn’t among the ones to give evidence for that conjecture. The first such evidence only came later, with the work of Bernstein-Vazirani, Simon, and Shor.” Well, I’m not sure I agree with you there. Deutsch definitely doesn’t show anything that would pass muster as a computational complexity result, but he does show in that paper a small computational task where quantum and classical differ, and he explicitly asks the question of whether this implies that computational complexity depends on whether you have a quantum or classical computer. I’d count this as “evidence.” (And also I think your list should include Deutsch-Jozsa!) As an interesting side note, Turing apparently was very interested in quantum theory and even in its foundations. In fact he discovered what we now call the quantum Zeno effect. I always like to think that had Turing lived longer we would have found quantum computers earlier (or in other words, Turing would be working in quantum computing had he lived today 🙂 ) 29. Mike Says: “I always like to think that had Turing lived longer we would have found quantum computers earlier (or in other words, Turing would be working in quantum computing had he lived today.)” Well, Deutsch of course would say that he did 😉 30. HDB Says: This response to the New Yorker article was very, very good! As a big fan of both the New Yorker and this blog I strongly encourage you to write in to them despite the lengthiness of your comments. 31. Maya Incaand Says: Why do I have this feeling that there are a lot of closet mathematicians around here? 32. Mark Probst Says: I don’t think what you’re calling “scientifically fruitful” has anything to do with Deutsch’s (Popper’s) notion of “explanation”.
If, to take an example from Deutsch’s new book “Beginning of Infinity”, you had a good idea while thinking about the seasons being caused by some private affair between Greek gods, you’d call it “scientifically fruitful”, but I don’t think anybody would call it a good explanation. In that sense I’m not even sure about what your point is in that part of the post. 33. Blake Stacey Says: Chris Fuchs once told the story that he went around asking people who had made notable discoveries in quantum computation whether they had been inspired by Everett’s interpretation of quantum mechanics. One — I think it was Simon — wrote back, “Who is Everett and what is his ‘interpretation’?” 34. Scott Says: Mark Probst: I agree that it’s by no means necessary that something that’s scientifically fruitful also have explanatory power. My claim is simply that empirically, the converse statement holds: something that’s not scientifically fruitful is exceedingly unlikely to have explanatory power. Do you have a good counterexample (besides MWI, of course 🙂 )? 35. Scott Says: Dave Bacon #28: Sorry, I guess we differ here … if all I knew was Deutsch-Jozsa, I wouldn’t be convinced that anything of complexity-theoretic significance was going on. Whereas Bernstein-Vazirani would pique my interest. (Of course, in retrospect, complexity theorists “should have” leapt at Deutsch-Jozsa earlier — but Deutsch and Jozsa also “should have” pursued an asymptotic gap! Oh well, it only took a few years from there to Shor’s algorithm anyway.) 36. Blake Stacey Says: OK, having said that, I had to go look. I think I was recalling it from PIRSA:07090068, where he quotes Jozsa as saying, I’ve known of Everett interpretation since the mid 1970’s and never really adopted/liked it, even from outset. It always was (and still is) a very vague and incomplete framework to me. … I’m not aware that the Everett ideas have ever played any significant role in my thinking on quantum things. I don’t have a clear impression of any particular imagery that I could name, underlying or guiding my quantum thoughts … I do not see that any quantum comp/info developments particularly support the Everett view in any way compared to any other prospective interpretations. Shor said that he knew of Everett before starting work on his factoring algorithm, but that “the idea was really more to use periodicity, and inspired by Simon’s algorithm.” To which the punchline is Simon saying, “Who’s Everett, and what’s his interpretation?” 37. Scott Says: Mike #27: Yes, that’s well-expressed. The combination of superposition and interference is what can lead to quantum speedups for certain problems. But figuring out which problems you can actually get a quantum speedup for, and how much speedup, and by which problem-specific techniques, is an entire field of study. To whatever extent you can find general principles underlying the answers to such questions, you’ll revolutionize the field! 38. roland Says: It’s nice to see that you haven’t lost your essayistic prowess, Scott. There has been another CS-related piece in the New Yorker lately which I found much worse than Galchen’s. Adam Gopnik’s thoughts on AI were quite shallow and error-riddled. Well, I love the magazine nevertheless. (Although I don’t belong to the target audience of chardonnay-sipping Manhattan socialites) 39. Blake Stacey Says: I like the amended blog tagline. (-: 40.
Mike Says: Scott, “To whatever extent you can find general principles underlying the answers to such questions, you’ll revolutionize the field!” Well, I’ll get to work on that right away 🙂 41. Lenny Sands Says: Scott, do you have an agent? Because I bet you could write a killer pop science book on complexity theory and quantum computers. Get that puppy optioned for a PBS series, make you a couple million, go be as eccentric as you want. With a little work you could make Howard Hughes look as straight-laced as Mr. Cleaver. 42. Greg Kuperberg Says: complexophobia – I thought that it’s fear of complex numbers. Which I suppose sometimes comes to the same thing as your definition. Also, physics advances by TAMING absurdities. Which is evidently the opposite of what the New Yorker sometimes does. 43. Gil Kalai Says: There is an interesting remark Scott made (to ‘sp’) a few years ago (in the post “Scientific American article is out”) which mentions the connection between the impossibility of (computationally superior) quantum computers and David Deutsch’s argument for the many-worlds interpretation. “If quantum computing is impossible, I would say that quantum mechanics as physicists have taught and understood it for 80 years is wrong. Think about it this way: if quantum mechanics can’t be used to efficiently sample any probability distribution beyond what a classical computer can efficiently sample (forget for the moment about deciding all languages in BQP), then there must be an efficient classical algorithm to simulate the outputs of a quantum process. But that, in turn, means that there’s some complete description of interacting physical systems that’s vastly more efficient than writing down the entangled wavefunction (i.e., that requires polynomially many rather than exponentially many bits). And this, I think, would be the biggest change in physicists understanding of QM since the 1920′s: it wouldn’t be a local hidden-variable theory, but it would be something even more astounding, a “succinct” hidden-variable theory. I think such a theory would, for example, overturn David Deutsch’s famous argument for the many-worlds interpretation (“where is all the computation happening, if not in parallel universes?”). The answer would be: it’s happening in a single universe that stores the succinct representation of the wavefunction. You’ll notice that in the above argument, I never once mentioned noise. That’s not an accident. For either the noise is enough to enable an efficient classical simulation of the quantum computer, in which case you’re in the situation above, or else it isn’t, in which case some sort of nontrivial quantum computing is possible by definition.” 44. Mike Says: “I think such a theory would, for example, overturn David Deutsch’s famous argument for the many-worlds interpretation (“where is all the computation happening, if not in parallel universes?”). The answer would be: it’s happening in a single universe that stores the succinct representation of the wavefunction” Scott, I’d be interested in your response to this. Is there any hint of this — where does the evidence lie? 45. Dave Bacon Says: Scott, I do disagree strongly with not including Deutsch-Jozsa in your list. From the abstract to their 1992 paper: “A class of problems is described which can be solved more efficiently by quantum computation than by any classical or stochastic method.
The quantum computation solves the problem with certainty in exponentially less time than any classical deterministic computation.” From the paper: “In this paper we demonstrate the importance of quantum processes for issues in computational complexity. We describe a problem which can be solved more efficiently by a quantum computer than by any classical computer. The quantum computer solves the problem with certainty in exponentially less time than any classical deterministic computer, and in somewhat less time than the expected time of any classical stochastic computer.” This still seems to me to be a big step in the “evidence” for quantum computers giving speedup. In particular showing QP not in ZPP relative to their oracle problem seems to me like a pretty good dose of evidence. (Another paper that has somehow been lost to the sands of time that discusses some of these early complexity results is “The quantum challenge to structural complexity theory” by Berthiaume and Brassard.) 46. Dave Bacon Says: Oh and a great quote from that paper (Bethiaume and Brassard) is “It must be pointed out that the class QP was introduced under this very name by Deutsch and Jozsa[9], showing that these physicists are far from ignorant of computational complexity theory. Thus we probably have much more to learn from them than they have from us.” 🙂 47. John Says: “However, reading this article also depressed me, as it dawned on me that the entire thing could have been written fifteen years ago, with only minor changes to the parts about experiment and zero change to the theoretical parts.” Yes, there were some minor inaccuracies, but this is what annoyed me the most. However, I don’t know what Deutsch has done in the last 15 years, either, and the main point of the article was to profile Deutsch not to explain the status of quantum computing. 48. csrster Says: It looks like your character encodings got screwed up on http://www.scottaaronson.com/writings/highschool.html . None of the square roots come out right. 49. Hopefully Anonymous Says: I find it hard to believe the New Yorker article could be better than your blog post on it. Bravo. 50. Hopefully Anonymous Says: By the way, I have no interest in seeing your long form birth certificate, but it would be cool if you scanned and published your transcripts -high school through grad school, to your website. 51. Vadim Pokotilov Says: Had quantum computation been discovered earlier, would P vs NP still be the marquee problem in complexity theory, or would it be BPP vs BQP? It seems to me that P vs. NP is “only” interesting if they turn out to be equal, or resolving it leads to proof techniques that can be used to prove things about more “real” complexity classes, while BPP vs BQP will be interesting no matter what. You said that classical complexity was a prerequisite to quantum complexity, but now that we have quantum complexity, does the classical matter? 52. John Preskill Says: Complexophobia: Most science journalism makes experts squirm uncomfortably. I doubt this is much more true for complexity theory than other topics. (I used to do cosmology …) Evidence: One might say that Feynman presented evidence for exponential speedups in his 1982 paper. His Main Point is that there are quantum simulations that we don’t know how to do efficiently on a classical computer. Deutsch tried to restate the point in complexity theory terms, which was very important. I remember hearing a talk by Seth Lloyd on quantum computing in the early 1990s. 
Michael Douglas, a theoretical physicist who knows a lot about computer science, was in the audience, too. Mike told me afterwards that the talk had disappointed him because it did not address Deutsch’s Big Idea that quantum computers can achieve exponential speedups. This stirred me to read Deutsch’s 1985 paper, and I was unimpressed. It was “obvious” to me that at best Deutsch was talking about an exponential “speedup” that occurs with exponentially small probability. Later I read the Deutsch-Jozsa paper and I was unimpressed. It was “obvious” to me that the “speedup” resulted from insisting on deterministic rather than probabilistic algorithms. But a few years later I changed my mind … Even so, a lot of things are still “obvious” to me. 53. Scott Says: Had quantum computation been discovered earlier, would P vs NP still be the marquee problem in complexity theory, or would it be BPP vs BQP? No, I think P vs. NP would still be extremely important (along with NP vs. BQP, which you don’t mention). NP is a “real” complexity class, in the sense that NP-complete problems are problems that we’d really like to solve. And I think there’s no real doubt that a proof of P≠NP would require (and lead to) all sorts of other important advances in mathematical knowledge, probably including new algorithms. However, I think the most important point is that the barriers to proving P≠NP, BPP≠BQP, NP⊄BQP, and so on are all essentially the same: relativization, algebrization, natural proofs, and (less formally but most importantly) the huge variety of nontrivial algorithms. Therefore, you might as well focus on whichever of these problems is best suited to your present purpose (connecting to some other field, explaining to a popular audience, etc.), since any major advance on any of them would also constitute a major advance on the others. 54. Scott Says: John Preskill #52: Complexophobia: Most science journalism makes experts squirm uncomfortably. I doubt this is much more true for complexity theory than other topics. I respectfully disagree: it seems to me that the quality of science journalism varies enormously by topic. Daniel Dennett once wrote that no area of science has been better served by its writers than evolution. I would add that no area of science has been worse served by its writers than quantum mechanics! 🙂 To be fair, I think a lot of this has to do with how the scientists themselves write. Charles Darwin and Thomas Huxley both wrote beautifully. By contrast, I’d argue that Bohr and Heisenberg started a tradition of writing really obscurely about quantum mechanics that continued for almost a century, and that’s only started being corrected recently, with the rise of quantum information. (Incidentally, I think Deutsch is completely right on this point. And while I found plenty to disagree with in The Fabric of Reality, at least that book is clear enough that I can pinpoint exactly where the disagreements lie!) This stirred me to read Deutsch’s 1985 paper, and I was unimpressed. It was “obvious” to me that at best Deutsch was talking about an exponential “speedup” that occurs with exponentially small probability. Later I read the Deutsch-Jozsa paper and I was unimpressed. It was “obvious” to me that the “speedup” resulted from insisting on deterministic rather than probabilistic algorithms. Wait, are you arguing against me here? 🙂 I think these anecdotes (which I didn’t know, and which I thank you for sharing) make my point better than I did! 
The conclusions you drew were entirely reasonable ones to draw from the Deutsch and Deutsch-Jozsa papers—as I said, they’re also the conclusions that I would’ve drawn. A journalist’s spin might be: “ah, how foolish everyone else was, not to understand what Deutsch and Jozsa were driving at!” But science is kind of all about presenting the evidence that what you’re saying is true… 55. John Preskill Says: Scott: “are you arguing against me here?” No, not really. Deutsch and Deutsch-Jozsa were prescient, and for that they deserve much credit. But I would say that as of 1992 the most persuasive evidence for profound quantum speedups came from Feynman’s argument. 56. Scott Says: Deutsch and Deutsch-Jozsa were prescient, and for that they deserve much credit. But I would say that as of 1992 the most persuasive evidence for profound quantum speedups came from Feynman’s argument. Well-put; I’m in complete agreement there. (Except that I’d describe what Feynman gave not so much as an “argument” as a “pointing out of the lack of counterarguments”… 🙂 ) 57. rrtucci Says: “Charles Darwin and Thomas Huxley both wrote beautifully. By contrast, I’d argue that Bohr and Heisenberg started a tradition of writing really obscurely about quantum mechanics that continued for almost a century, and that’s only started being corrected recently, with the rise of quantum information.” You are just focusing on the best biology writers and the worst physics writers. Feynman (his Messenger lecture on quantum mechanics and his QED book) and Freeman Dyson, for example, wrote very clearly, in simple yet accurate terms, about quantum mechanics. I think the New Yorker article was pretty bad, riddled with cliches, but I blame the author Galchen for its shortcomings, not all of physicsdom. Scott, you gotta get over your love-hate relationship with us physicists. Me personally, I love all computational complexity theorists. 58. Scott Says: rr: I agree that Feynman and Dyson both wrote/write beautifully, but they weren’t the “founding fathers” of QM—Bohr and Heisenberg were. I also agree that Bohr and Heisenberg don’t bear direct responsibility for any shortcomings of a modern popular article. They were, however, responsible for starting the tradition of writing about QM in a way that flunks the Minus-Sign Test—a tradition that Feynman later tried his best to correct. Glad to hear about your love for ALL computational complexity theorists! So I assume your days of posting snarky anticomplexity comments on Shtetl-Optimized are over?? 😉 59. Vadim Pokotilov Says: As someone who’s not a scientist, it seems obvious to me (which is how I know that I’m almost certainly wrong): cutting-edge physics and computational complexity are doomed to be forever butchered in the popular press because they’re inherently less accessible. I’m not saying the study of evolution is easy, but people can develop some intuition about it (even if the intuition is more wrong than right). On the other hand, if one wants to even think about quantum Turing machines, one had better be prepared to spend time learning the background. Physics and computer science are huge interests of mine and have been for a while, the life sciences not as much. But if I had to write an article about either evolution or quantum computation, I could much easier bullshit through the evolution one, saying not much of note, but also not betraying as much ignorance. We’re all philosophers, but most of us ain’t quantum physicists. 60. 
Mike Says: “We’re all philosophers, but most of us ain’t quantum physicists.” This unfortunately sums up my predicament. 🙁 61. Scott Says: Mike #44: The answer would be: it’s happening in a single universe that stores the succinct representation of the wavefunction” Scott, I’d be interested in your resonse to this. Is there any hint of this — were does the evidence lie. Sorry, just realized I never responded to this! There’s been essentially no change here—we still think that quantum computers can’t be efficiently simulated by classical computers. If anything, the evidence for that proposition has gotten stronger in recent years, with, e.g., the results on sampling problems due to Bremner, Jozsa, and Shepherd, and myself and Arkhipov. 62. Scott Says: Vadim and Mike, I think there’s a simple solution to the problem: if you’re writing a popular article about a topic that you don’t understand well, have experts (preferably not the ones you’re writing about) fact-check and “concept-check” the article. Personally, I’ve been happy to do that for every journalist who’s ever asked me to, and I know many others would be happy too. Fact-checking a popular article is also an excellent experience for grad students. 63. Kamal Says: I don’t hear Bohmian mechanics discussed much anymore. Is there any work being done to advance it? 64. Scott Says: Kamal: Not many people work on Bohmian mechanics, but a few who do (who I know about) are Sheldon Goldstein and Roderich Tumulka at Rutgers, and Antony Valentini at Perimeter Institute. Search the arXiv for their papers if you’re interested. 65. Kamal Says: Thanks Scott 66. Mike Says: Scott, “If anything, the evidence for that proposition has gotten stronger in recent years” That’s what I thought — thanks. 67. Vadim Pokotilov Says: Re-reading my comment, I just want to make sure I wasn’t coming off as dissing actual philosophers; philosophy isn’t something I know much about and I’m sure there’s a lot of study involved in mastering it. I just meant that there are more armchair philosophers than armchair quantum physicists or complexity theorists. 68. matt Says: To Scott’s comment 56: I think that the strongest argument for the difficulty of simulating quantum systems on classical computers is STILL the lack of counter-arguments. While Scott once commented to me that he regarded Shor’s algorithm as having provided strong evidence for the difficulty of quantum simulation given the belief in the difficulty of factoring, I (respectfully!) regard this statement as nonsense: many more smart people have put in many more hours trying to figure out how to simulate quantum systems than have put in hours trying to figure out how to factor. And up until relatively recently, they had much more motivation to do so (and even today, I’d say they still have more motivation to do so). So, if you want to base arguments for the difficulty of a problem on people’s inability thus far to solve it, we should just look at people’s inability to simulate quantum systems as being the best argument in that vein. Other than arguments based on “we don’t know how to solve this problem”, there is also the argument that “solving this problem implies solving many other problems”; basically, arguments based on reductions. Such arguments are strongest when applied to the problem that you can reduce other problems too; i.e., they are much stronger when applied to simulating quantum systems than to factoring. 69. Scott Says: Matt, I respectfully think you’re talking nonsense! 
Firstly, before the quantum information era, my understanding is that most physicists did not stress the exponentiality of Hilbert space, nor did they regard quantum simulation as having some qualitatively different level of hardness than classical simulation. (If they had, then they would’ve invented quantum computing in the 50s or 60s, before Feynman and Deutsch!) Rather, their impulse was to throw DMRG, Wick rotation, or some other calculational tool at the quantum simulation problem and hope it worked (a hope, of course, that was very often borne out!). I don’t know if any physicists of the 50s or 60s explicitly considered the question of whether there’s any such ansatz that always works—but I bet they would’ve been surprised if you told them that, if there was one, you could also use it to factor integers. To lay my cards on the table: Before Bernstein-Vazirani, Simon, or Shor’s algorithms, I probably would’ve given 50% odds for BPP≠BQP. After Shor’s algorithm, I give it 95% odds. If someone told me tomorrow that factoring was in BPP, I’d scale back to 75% odds. 70. matt Says: Scott, I think this is a question of where each of us came from. I come from physics, and place more trust in the physicist’s abilities, you come from CS and place more trust in the computer scientists. To make the case that people knew about the importance of exponential growth in Hilbert space well before the 70s: Exact diagonalization dates back I think to the 50s. Early papers were, if I recall correctly, doing ED on maybe 11 sites (Bonner and Fisher on the Heisenberg chain). They clearly knew that the problem grew exponentially that way, as they simulated the biggest thing they could. At this point I’m just going to point to all those methods you mention, plus much older methods like perturbation theory and density functional theory and say that smart people wouldn’t have put all that effort into developing those methods if they thought that there was a faster way. Now, I feel your reply above is essentially like saying (to pick a specific example): “The development of perturbation theory for quantum systems does not in itself imply any understanding that simulating quantum systems is harder than simulating classical systems, since people also developed perturbation theory for problems like turbulence which can be simulated on a classical computer with a cost that merely grows polynomially with system size rather than exponentially”. And that last is correct as a statement in itself. But since people were trying brute force methods for quantum as early as the 50s, I feel that they were aware that the simulation difficulty grew exponentially. That is, people tried and abandoned the brute force approach much more readily in the quantum world than they did in the classical world. In fact, in the quantum world it took quite a long time for ED to return to favor….the Bonner and Fisher Heisenberg chain simulations were not repeated for a long time, even when computers did allow much larger simulations, leading to some confusion. So, I think people almost got _too_ scared of the exponential growth. On the classical side, too, of course, computer simulation of turbulence also fell out of favor for a time, but computer simulations of non-turbulent systems were, I think, common since WW II. So, I don’t think that there was as great a fear of classical simulation. 
Also, in many cases the approximate methods used for quantum systems were still numerical, like density functional theory, while for a long time the goal of perturbation theory for classical systems was to get better understanding by pen-and-paper calculations. So, I think there was a difference in motivation for developing alternate methods: on the quantum side, people were happy to develop alternate, approximate methods that were still numerical, while on the classical side, the alternate methods were mostly preferred because they were non-numerical (one can point to a possible exception in numerical series calculations for classical stat mech, again dating to the 50’s—this is amusing from a complexity standpoint as they were concerned in this case with problems that could be handled by Monte Carlo simulation, i.e., using randomness, and that is sort of intermediate between P and BQP). So, I’m standing by: if someone like Michael Fisher had been told the meaning of BPP and BQP back in 1955, he’d have put near certainty on them being different, regardless of factoring. Anyway, it’s interesting to get your perspective…I feel like it’s a result of coming from different fields, so that can make me re-evaluate my own judgments also. 71. matt Says: Btw—I will definitely agree that all those physicists would have been surprised by the fact that an ansatz that always works allows you to factor integers. That is incredibly surprising. But, my point is that they probably would have felt that they had more important problems to solve with an ansatz that always worked (especially since RSA wasn’t around yet). 72. Raoul Ohio Says: You don’t have to study a lot of philosophy to have fun debating with philosophers. The field is very buzz-word oriented, and debating issues is kind of like poker: I see your “Wittgenstein’s glass ship” and I raise you “deterministic skip lists” and “the trichotomy”; and so on. And if you are tired and just want to get them off the porch, pay for the pizza and give them a tip. 73. Gil Kalai Says: It is interesting indeed to compare the evidence for exponential speedup proposed in Feynman’s paper and the evidence from Shor’s algorithm. The fact that certain computations in quantum mechanics which are supposed to describe some natural phenomena require exponentially many steps on a digital computer may have several alternative explanations. One explanation is that the computation can be carried out by a clever polynomial-time algorithm. (Example: computing determinants from the definition is exponential, but determinants are computationally easy.) In hindsight, although remarkably improved computational methods for these physics computations were found over the years, it seems correct that the type of computations Feynman referred to is computationally hard for classical computers. (Probably there is a lot yet to prove there, even regarding making these computations with a quantum computer, and certainly regarding hardness.) Another explanation is that in cases where the computations Feynman referred to are computationally intractable (on a classical computer), they fail to describe the natural quantity they are supposed to describe. This, of course, does not mean that QM is wrong, but just that a certain computation based on certain modeling via QM is wrong in some ranges of parameters. This still looks like a reasonable possibility. (It is reasonable even if quantum computers can be constructed and even more so if it is impossible to build them.)
Shor’s algorithm is a different and more dramatic sort of evidence. The case for exponential speedup compared to classical computers is very strong. The connection to real-life physics is not as close. 74. Greg Says: Hello: I am not a physicist; I am a molecular biologist who is interested in mechanisms controlling mutation, such as somatic hypermutation, where B-cells carry out programmed mutation of immunoglobulin genes in response to stimulation by antigen. Please excuse my ignorance on the subject of quantum mechanics, but I was wondering if there is any theoretical possibility that DNA bases could ever exist in a state of superposition? After reading ‘The New Yorker’ article, including the description of a two-qubit quantum computer that can find the Queen among four “cards” in only one “try”, I imagined the notion of a B-cell using a similar strategy to “find” the “correct” mutation directing higher affinity of an encoded immunoglobulin to an antigen. Is this an absurd idea? I have spent the past few days attempting to understand the “four-card-monte experiment” and confess that I can’t conceptualize it: I imagine a conventional computer as a person who turns over individual cards, one at a time, but a quantum computer as a person who “touches” all four cards simultaneously, and somehow, through this process, gleans the information, without having to turn the cards over, and decides correctly without inspection. Again, I apologize for my ignorance; perhaps my post will stand to illustrate flawed popular conceptions of quantum mechanics. 75. matt Says: Gil, why do you think factoring is hard classically? 76. Scott Says: Greg: DNA certainly has quantum-mechanical aspects; I remember reading about how they were important for Watson and Crick when they worked out the DNA structure. But while I know next to nothing about biochemistry, I doubt that quantum coherence could be maintained in DNA long enough to carry out a “Grover search” even on a 4-element space. (Maybe others can correct me if I’m wrong.) The main point to understand is that, to search a list of N items, Grover’s algorithm requires about (π/4)√N steps. That’s substantially less than N—which is amazing!—but it still does scale with N, so that (for example) if N=10^1000 you’d need ~10^500 steps. When N=4, it so happens that you can do the search in 1 step, but for larger N you need more steps. 77. Mike Says: Scott, “The main point to understand is that, to search a list of N items, Grover’s algorithm requires about (π/4)√N steps. That’s substantially less than N—which is amazing!—but it still does scale with N, so that (for example) if N=10^1000 you’d need ~10^500 steps. When N=4, it so happens that you can do the search in 1 step, but for larger N you need more steps.” I think I understand this, but how does the number of qubits factor in? On the assumption that you could precisely (enough) manipulate, say, 1000 “genuine” qubits, what impact? How about 10,000? I know that’s a pretty big assumption, and I don’t have any idea how to do it myself, but I’m taking the position that I’m allowed to just view it all as an “engineering” problem, since (I believe) the concept is proved, at least in ‘principle.’ 78.
Chesire Says: You’re probably right about Grover and DNA, but quantum superpositions have been observed in large organic molecules (up to 430 atoms): “Fattening up Schroedinger’s cat”: http://www.nature.com/news/2011/110405/full/news.2011.210.html “Quantum interference of large organic molecules”: http://www.nature.com/ncomms/journal/v2/n4/full/ncomms1263.html (Note: viruses have also been suggested as possible: http://arxiv.org/abs/0909.1469 ) 79. Scott Says: Mike: With k qubits, you could use Grover’s algorithm to search a list of size roughly N=2^k, using ~2^{k/2} steps. So if k=1000 (say), then the number of steps to run a Grover search of the appropriate size is the limiting factor, not the number of qubits. On the other hand, with 1000 qubits you could run Shor’s algorithm to factor a decent-sized number, and that would take a reasonable number of steps also. 80. Mike Says: Thanks Scott. So it really does depend on the exploitable feature of the particular problem. And, I guess that means that engineering isn’t everything. However, if engineering breakthroughs do continue to occur (OK, another assumption, but I think a safe bet at least to some limit), does a Grover search just become, at some point, in some sense, just a brute force calculation? 81. Mike Says: Scott, Of course, I meant a brute force “quantum” calculation. 82. Slipper.Mystery Says: Comment #77: Usually it is assumed that classical computer technology will improve fast enough to overwhelm any sqrt advantage of Grover-class algorithms. By contrast, according to Wikipedia the largest RSA number factored (late 2009) has 232 decimal digits (768 bits): http://eprint.iacr.org/2010/006.pdf . This took 10^{20} operations in roughly two years (equivalent to using one thousand 2.2GHz single core processors). Roughly approximating the number of operations to factor a number with d decimal digits as exp(c·d^{1/3}), and fixing the constant c using the estimate from the above reference that a 309 decimal digit number (1024 bit) would take about 1000 times longer, gives 660 billion years to factor a number with 617 decimal digits (2048 bit, largest of the http://en.wikipedia.org/wiki/RSA_numbers) using the same technology, and even with a million times as many processors (a billion 2.2GHz cores) still a few hundred thousand years. A quantum computer could do that last one with on the order of a few 100k gates in principle, hence on the order of seconds even for microsecond gates. That’s the virtue of exponential speedup that Scott has emphasized. But re the relation between BQP and BPP: from Simon’s problem we know that the quantum computer has an exponential speedup over any classical algorithm. But Scott describes this only as “evidence” that the two complexity classes differ, because it is only relative to Simon’s oracle. For the uninitiated, what does this mean? Why doesn’t such an example show that BQP is bigger than BPP? For our intuition, what more would one need to demonstrate that BQP is bigger? What does it mean to a complexity theorist to say that quantum algorithms like Grover’s, Simon’s (and certainly Shor’s period finding, if not factoring), all provably faster than any classical algorithm, are not proof that the complexity class BQP is bigger than BPP? (I apologize if this has been multiply answered in other threads on this blog, and everyone else has already mastered the role of relativization in complexity theory.)
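For anyone who wants to redo this arithmetic, the whole back-of-envelope fits in a few lines (same crude exp(c·d^{1/3}) fit as above, in decimal digits rather than bits, so only the orders of magnitude mean anything; the helper names are just mine):

```python
# Back-of-envelope only: reproduces the rough estimates in the text above.
import math

def cbrt(d):
    return d ** (1.0 / 3.0)

# ops(d) ~ A * exp(c * d**(1/3)): anchor A to RSA-768 (232 digits, ~1e20 ops),
# and fix c so that 309 digits costs ~1000x more.
c = math.log(1000) / (cbrt(309) - cbrt(232))
A = 1e20 / math.exp(c * cbrt(232))

def classical_ops(d):
    return A * math.exp(c * cbrt(d))

ops_per_year = 1000 * 2.2e9 * 3.15e7   # ~1000 single cores at 2.2 GHz, one op per cycle
for d in (232, 309, 617):
    print(f"{d} digits: ~{classical_ops(d):.1e} ops, "
          f"~{classical_ops(d) / ops_per_year:.1e} cluster-years")

# Grover, by contrast, buys only a square root over classical brute force:
for k in (40, 80):
    N = 2 ** k
    print(f"N=2^{k}: ~{N / 2:.1e} classical queries vs ~{math.pi / 4 * math.sqrt(N):.1e} quantum")
```

83.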
Slipper.Mystery Says: > “For our intuition, what more would one need …” OK, I found some useful intuition here http://terrytao.wordpress.com/2009/08/01/pnp-relativisation-and-multiple-choice-exams/ regarding the Baker-Gill-Solovay no-go result for P vs. NP relativization, but would be curious if an expert (“are we ready, Mr. Scott?”) could summarize the analogous situation for BPP vs. BQP. 84. Scott Says: Slipper.Mystery: So your question is, why doesn’t the oracle separation already yield a proof of BPP≠BQP in the unrelativized world? The problem is that, once you get rid of the oracle, there might well be clever classical algorithms for solving all the problems that quantum computers can solve (so that BPP would equal BQP). So for example, maybe every explicit function instantiating the Shor or Simon oracles has some additional structure that makes the problem easy to solve classically. We don’t think that’s the case, but no one can rule it out, and doing so is probably harder than proving P≠NP! (Though there’s no formal implication in either direction.) By analogy, it’s easy to construct an oracle relative to which P≠BPP (i.e., randomized algorithms can do things in polynomial time that deterministic ones can’t). But almost all theoretical computer scientists believe that in the “real,” unrelativized world, P=BPP, since it seems possible to construct extremely good pseudorandom number generators, which are good enough to “derandomize” every BPP algorithm. So, there are certainly cases (even provable cases, like IP=PSPACE) where two complexity classes are equal, despite the existence of oracles that separate them. If you like, the existence of a separating oracle is merely the “first test” that two classes have to pass, on the way to being unequal! Two complexity classes can be equal for “nonrelativizing reasons” (reasons that don’t hold with an oracle around), as we believe is the case for P and BPP, and know is the case for IP and PSPACE. We think BPP and BQP will pass all the remaining, harder tests; and that there’s no clever “pseudoquantum generator” (or whatever 🙂 ) to dequantize every unrelativized quantum algorithm. But if you prove that, you also prove P≠PSPACE, and probably collect a Fields Medal. 85. Scott Says: Mike #80: So it really does depend on the exploitable feature of the particular problem. YES! And, I guess that means that engineering isn’t everything. YES! However, if engineering breakthroughs do continue occur (OK, another assumption, but I think a safe bet at least to some limit), does a Grover search just become, at some point, in some sense, just a brute force calculation? I’m not sure I understand what you’re asking. Yes, today we usually do think about Grover’s algorithm as just “quantum brute-force search.” But of course, quantum brute-force is quadratically faster than classical brute-force! 86. Slipper.Mystery Says: Scott: “So for example, maybe every explicit function instantiating the Shor or Simon oracles has some additional structure that makes the problem easy to solve classically.” Ok, tks for this. One last naive question for more concrete intuition: a Simon oracle could be to take a period ‘a’ of n random bits, Xor it with n bit binary numbers to give 2^{n-1} pairs, and map those pairs via a random permutation to (n-1) bit numbers, with the oracle then a look-up table for this (random) map from {0,1}^n->{0,1}^{n-1}. 
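In code, at toy scale, the construction I have in mind is just the following (throwaway names, and of course writing out the whole table already takes 2^n work):

```python
# Toy version of the look-up-table construction described above (n kept tiny on purpose).
import random

n = 8
a = random.randrange(1, 2 ** n)        # hidden period, nonzero

labels = list(range(2 ** (n - 1)))
random.shuffle(labels)                 # the "random permutation" onto (n-1)-bit labels

pair_label = {}
f = {}
for x in range(2 ** n):
    rep = min(x, x ^ a)                # canonical representative of the pair {x, x^a}
    if rep not in pair_label:
        pair_label[rep] = labels[len(pair_label)]
    f[x] = pair_label[rep]             # the oracle: a plain (exponentially large) look-up table

# The obvious classical attack: query until two inputs collide (birthday bound, ~2^(n/2) queries).
seen = {}
for queries, x in enumerate(random.sample(range(2 ** n), 2 ** n), start=1):
    if f[x] in seen:
        print("recovered a =", x ^ seen[f[x]], "after", queries, "queries")
        break
    seen[f[x]] = x
```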
What additional structure might this have to make it easy (i.e., O(n) rather than O(2^{n/2}) to find ‘a’ classically, or is the function insufficiently explicit? “and probably collect a Fields Medal.” unless one is over 40 87. Pseudonymous Says: @Scott: Off the topic, but you often mention protein folding as a “naturally occurring NP complete problem”…Are there any good review papers that mathematically express the problem and talk about the heuristics used to solve particular instances of it? 88. Scott Says: Slipper.Mystery #86: As it happens, we currently don’t know any way to instantiate the Simon oracle to get a hard problem! (And for the Shor oracle, of course we can instantiate with the modular exponentiation function, but not much else…) The way you proposed doesn’t seem to work, since what’s the “random permutation”? How do you implement it in polynomial time? To “instantiate” the oracle function f, you need to be able to give someone a polynomial-time algorithm for computing f (with no additional secrets), such that it’s still hard for them to determine the hidden period. And that’s what no one knows how to do in the case of the Simon oracle. (Sure, you can describe a polynomial-time computable function f such that f(x)=f(x+a) for all x, but all the obvious ways to do so will reveal a…) 89. Scott Says: Pseudonymous #87: I tried and got several papers mathematically expressing the problem (in particular, in the “hydrophobic-hydrophilic” or “HP” model, which are keywords you might want to use). 90. Gil Kalai Says: Matt #75, I did not say that I think factoring is hard classically. (If I had to guess I would guess that indeed it is for various reasons, but there are people who will make the opposite guess.) Shor’s algorithm is a very dramatic example of exponential speedup between what we can do with classic computers and what we can do with quantum computers. It applies to a general complexity theoretic question (with major practical relevance) which was extensively studied before (not in the context of quantum mechanics). The computations Feynman talked about in his paper are indeed very interesting. In hindsight, I wouldn’t be surprised that general form of such computations are BQP-complete (or something like that.) (But I am not sure Feynman was explicit enough so we can talk about “general form of such computations.”) So they are perhaps even “harder” than factoring. But at the time the paper was written I dont think there were clear insights that this is genuinly computationally difficult. Also (like for protein folding that was just mentioned) the fact that a general problem of some type is computationally hard does not mean that instances of the problem we encounter naturally are hard as well. Now suppose you have a concrete quantum mechanical computation that is feasible for the Hydrogen atom, and with immense effort you can make a similar computation for the Helium atom. Now you want to make the same computation for a large complicated atom. You have a complicated formula but evaluating it requires huge, completely hopeless computations on a classic computer. It is interesting which scenario is closer to the truth: 1) There are miraculous simplification which will allow to carry out precisely these complicated computations for large atoms on classical computers. 2) The computation is indeed infeasible classically but we will be able to carry them with quantum computers and they will give results with great agreement with experiments. 
3) The same as 2) but when we run the quantum computer the results which were so good for simple systems will not match what we see in nature. 91. Slipper.Mystery Says: Scott #88: “To “instantiate” the oracle function f, you need to be able to give someone a polynomial-time algorithm for computing f (with no additional secrets), such that it’s still hard for them to determine the hidden period.” Ah, relativized illumination slowly dawns, with only two invocations of the complexity oracle (with respect to which many such NP questions are equivalent to P, but only if the black box function underlying this blog’s responses is arbitrarily trustworthy) 92. Kvantum Says: I think it’s a bit mis-leading to say that almost noone works on de-Broglie Bohm. Last year there were a conference in Europe dedicated to dBB alone which had more people than any other interpretation-related conference ever had. So it’s safe to say it’s one of the most active fields in interpretation (possibly more active than Everett). However there is very little SCIFI selling factor over dBB, so it doesn’t really get much pop-sci coverage. Scott: What is your current position? MWI? I watched a blog talk by you 1-2 years ago I think, but you didn’t make it clear exactly what your “gut” feeling is. 93. Scott Says: Kvantum: I know there are other people working on deBroglie-Bohm; I just mentioned the three who I’ve heard of (which is probably correlated with: ones in North America, ones who talk to quantum information / quantum foundations people not working on dBB). I thought I made my “current position” clear in this post! Sometime in graduate school, I realized that I was less interested in winning philosophical debates than in discovering new phenomena for philosophers to debate about. Why brood over the true meaning of (say) Gödel’s Theorem or the Bell Inequality, when there are probably other such worldview-changing results still to be found, and those results might render the brooding irrelevant anyway? Because of this attitude, I confess to being less interested in whether Many-Worlds is true than in whether it’s scientifically fruitful. As Peter Shor once memorably put it on this blog: why not be a Many-Worlder on Monday, a Bohmian on Tuesday, and a Copenhagenist on Wednesday, if that’s what helps you prove new theorems? 94. matt Says: Gil, I’d also call the problem of quantum simulation a problem of major practical importance. In fact, I think except for RSA and other cryptography applications, none of which existed before the 70s, I don’t know any practical importance to factoring, but perhaps you can correct me on that. So, on the grounds of importance, quantum simulation wins hands down….as in, I’d guess 2-3 orders of magnitude more person-hours have been spent working on that problem, by smart people too such as von Neumann, Dirac, Feynman (I’m not talking about quantum computing here, I’m talking about diagrams, which are a calculational tool), Anderson, White, Wilson, Schwinger, Mott, Bardeen, Schrieffer, Kohn, Luttinger, Pines, etc, etc… Regarding Feynman’s paper (and Manin’s, and others) we can argue about the generality of problems considered. My feeling is that on the one hand, a physicists in the 50s would have had to be a bit silly not to think that quantum simulation was hard, simply because people were doing quantum simulation then and were seeing the exponential growth in practice! On the other hand, people certainly had not formalized the notion of hardness. 
To my mind, this formalization of different complexity classes is one of the major contributions of computer science. And considering that even formalizing NP took until the 70s (long after computer scientists had encountered many NP-hard problems), I won’t claim that physicists would have had some formal understanding of why it is hard. They just would have known it, in the same sense that a pre-Cook–Levin computer scientist would have had some feeling that SAT was a tricky problem (perhaps the physicist would have had a better sense, as they would have been concerned with practical implementation). I don’t necessarily think even a fast classical algorithm for factoring should dent our belief that BQP and P differ, assuming that the fast algorithm for factoring worked in some way completely unrelated to Shor’s algorithm. After all, it would simply expose some extra structure in integers, but wouldn’t necessarily expose extra structure in quantum mechanics. 95. matt Says: I’m going to slightly down-revise my 2-3 order-of-magnitude effort difference above… I was counting all work on quantum many-body physics as work towards more efficient quantum simulation (which it is! looking at things like emergent gauge theories in correlated electron systems is simply looking for hidden structure that allows a simpler understanding of those systems in terms of degrees of freedom that might be weakly coupled), but then to be fair I should count all work on number theory as work on factoring (which it also is, to the extent that it is concerned with finding extra structure in integers). Still, I’m going with more than 1 order of magnitude difference in effort. 96. Kvantum Says: Scott: In your profession I can easily understand your thinking; whatever interpretation helps you do science better at the time is obviously the best model to hold in your head at that time. However I find it hard to believe that you haven’t spent any sleepless nights, or at least hours over some wineglasses, thinking about this at all? If you are 100% agnostic with no opinion whatsoever, that’s pretty rare? Here is a list of attendees and more info about the dBB conference last year, by the way: http://www.vallico.net/tti/master.html?http://www.vallico.net/tti/deBB_10/conference.html 97. Maxwell daemon Says: Rereading your “Quantum Computing Since Democritus Lecture 9”, I noticed a curiosity that perhaps went unnoticed by you: Fermat’s last theorem implies that if you use a norm other than 1 or 2, your qubits can’t have rational amplitudes. So if you believe that God really loves pure qubits and rational numbers, you have a good reason to use the 2-norm for quantum mechanics! I would go even further and propose a conjecture: the rationals are dense in the 2-norm unit sphere for vectors of any dimension. And they are not dense in the unit sphere of any other norm > 2. Unfortunately, I think the second part of the conjecture is false. 98. Scott Says: Kvantum #96: However I find it hard to believe that you haven’t spent any sleepless nights, or at least hours over some wineglasses, thinking about this at all? I spent plenty of sleepless nights on the interpretation of QM as a graduate student at Berkeley—but then felt like I’d given enough of my life over to it, and was ready to move on.
🙂 I think Many-Worlds does a better job than its competitors (Bohmian mechanics, the “Copenhagen interpretation” (whatever that is)) at emphasizing the aspect of QM—the exponentiality of Hilbert space—that most deserves emphasizing, at least from a modern quantum computing standpoint. As Steven Weinberg once put it, Many-Worlds is like democracy: “terrible except for the alternatives.” But my real hope is that we’ll learn something new someday that changes the entire terms of the debate. 99. Mike Says: “As Steven Weinberg once put it, Many-Worlds is like democracy: “terrible except for the alternatives.” But my real hope is that we’ll learn something new someday that changes the entire terms of the debate.” Scott, What might that be? What are the clues you’re most intrigued by? 100. Raoul Ohio Says: Maxwell daemon: By “the rationals are dense in the 2-norm unit sphere for vectors of any dimension. And they are not dense in the unit sphere of any other norm > 2.”, do you refer to Q^n, the subset of R^n consisting of vectors x such that each component of x is rational? If so, Q^n is very likely to be dense in the unit ball of R^n for any norm you can dream up. This is very easy if n is finite and the norm is a q-norm, for q in [1,inf]. 101. Scott Says: Mike: What are the clues you’re most intrigued by? I don’t know of any—and if I did, I doubt I’d be able to discuss them in the space of a blog comment! 🙂 Basically, I think we’re in the intellectually-unsatisfying position of David Hume, who knew that the then-prevailing explanations for biology were bad, but didn’t have anything better to replace them with. (In that case, the gap between Dialogues Concerning Natural Religion and Darwin’s Origin of Species was 80 years.) 102. Maxwell daemon Says: Not the unit ball, the unit sphere. I’m afraid I was unclear. Fermat’s last theorem states that $x^p + y^p = z^p$ has no nontrivial solutions for p > 2 and positive integers x, y and z. Divide everything by $z^p$ and you have the statement that $a^p + b^p = 1$ has no nontrivial solutions for p > 2 and positive rationals a and b. In other words, $\|(a,b)\|_p = 1$ has nontrivial rational solutions only if p = 2 or p = 1. What I’m trying to conjecture is some kind of generalisation of Fermat’s last theorem, to further drive home the point that the 2-norm is God’s choice for quantum mechanics, not only for qubits, but for qudits of any dimension. The problem is that a naïve generalisation is false, and even a not-so-naïve one (Euler’s conjecture) is also false, so I think the religious argument only works for qubits. 103. Maxwell daemon Says: Oh come on, Scott, no LaTeX here? 104. Kvantum Says: Scott: I actually disagree, because the Born Rule is basically what QM is all about. It’s the reason we take QM seriously, and “pure MWI” cannot derive it! Bohm can. Pure MWI needs to postulate something to get the Born Rule, so why not postulate particles? What is more natural than particles? I feel MWI’ers are somewhat dishonest in their debates when they claim that MWI doesn’t violate Occam’s Razor and that MWI is only “QM with the assumption that the wavefunction is real”, because in this view you won’t get probabilities. So obviously this bare view is incorrect and then you have to postulate something, and it’s suddenly not as “simple & attractive” anymore… You say you hope that there will be some new discoveries; are there any hints that this might occur? I feel like most physicists and philosophers regard QM as “the final theory” and that we have to accept the weirdness. 105.
Scott Says: Kvantum: Every interpretation of QM is “somewhat dishonest” in the same sense (of talking about QM in a way that sweeps crucial points under the rug): for example, Bohmian mechanics shunts all the complexity of QM into an innocuously-named “guiding wave.” For me, though, the more serious problem with Bohmian mechanics is that, if you want “determinism” (which was the main original selling point), then that only works for a very specific physical system: particles (with no spin) moving around in a continuous space. As soon as you want to incorporate finite-dimensional observables (such as spin), it’s easy to prove that the hidden-variable values at earlier times can no longer determine the values at later times. More broadly, you then confront the question: why the particular evolution equation suggested by Bohm, rather than any number of other evolution equations I could also write down that equally well imply the Born rule and are equally compatible with experiment? Sure, in the specific case of particle positions and momenta, there’s one rule that looks kind of neat, but for other observables, I can write down all sorts of “guiding equations” with no clear way ever to decide between them. Anyway, I don’t deny that hidden-variable theories lead to lots of interesting mathematical questions—in fact, I got interested enough in grad school to write a whole long paper about the computational complexity of hidden-variable theories, which maybe would interest you. 106. Greg Kuperberg Says: Since we have digressed into the topic of interpretations: There is ONE interpretation of quantum “mechanics” that I have found to be far and away more useful for actually believing quantum mechanics than any of the alternatives: that quantum mechanics, or more precisely quantum probability, is exactly probability theory except with non-commuting random variables. This interpretation has a proven track record of persuading research mathematicians, and of deflating certain distracting questions of quantum philosophy. The so-called “many worlds” interpretation, if you interpret it as a description of path summation, is also sometimes useful or even very useful for understanding certain mathematical arguments in both classical and quantum probability. However, if it were fundamental, it would for instance be a baffling point that BQP probably does not contain NP (or even PP), since after all NP is another complexity class that can be defined using “many worlds”. Many science journalists are baffled at exactly this point, which irked Scott enough to make a statement which is still there at the top: “Quantum computers are not known to be able to solve NP-complete problems in polynomial time.” 107. Mike Says: Greg, I don’t think thoughtful MWI proponents disagree with Scott’s (modified 🙂 ) statement at the top. During the recent excitement about the claimed proof that P is not equal to NP, Deutsch made a comment that touches on this point: “P and NP are complexity classes for computations performed by the universal Turing machine . . . We already know that computers harnessing quantum interference have a very different complexity theory from Turing machines. It so happens that such computers cannot solve NP-complete problems efficiently (though when I last asked, this had not yet been proved with the rigor that mathematicians require) . . .”
I suspect that he simultaneously thinks that the MWI is “fundamental” — though I’m uncertain exactly what you meant by that — accepts that the discovery of new physical phenomena could change this conclusion and, incidentally, the corresponding complexity analysis, and recognizes that quantum computers are not known to be able to solve NP-complete problems in polynomial time. Although, as is often the case, there’s a good chance that my understanding is limited or just plain wrong. 108. Greg Kuperberg Says: Mike – Certainly thoughtful MWI proponents would agree with Scott’s statement. Scott’s statement is a question of mathematical results, not interpretation. Thoughtful proponents of any interpretation would be careful to obtain the same actual answers as everyone else. So the question is not that; the question is how hard it is to be thoughtful if you espouse a particular interpretation. 109. Scott Says: Mike #107: It so happens that such computers cannot solve NP-complete problems efficiently (though when I last asked, this had not yet been proved with the rigor that mathematicians require) Can you point me to where Deutsch wrote this (unless it was a private discussion)? “When I last asked, this had not yet been proved with the rigor that mathematicians require” is a good candidate for the understatement of the century… 😉 (Note that we’re talking about something strictly harder to prove than P≠NP.) 110. Mike Says: Scott, Well, the century is still young — so there may be time for additional understatements 😉 It was in the FoR discussion thread on Yahoo: http://groups.yahoo.com/group/Fabric-of-Reality/message/18235 I’m sure you can find further grist for the mill scattered among his “informal” comments. 🙂 Greg, “the question is how hard it is to be thoughtful if you espouse a particular interpretation” I don’t know — but probably not NP-complete 🙂 111. melior Says: Over the years, I’ve developed what I call the Minus-Sign Test Thanks for this brilliant nugget! A little light went off in my head when I absorbed this insightful little gem. 112. Daniel Tung Says: Looks like Deutsch does know that he is weird… 🙂 I have some reservations about your emphasis on the usefulness of interpretations rather than their truth. It’s only when you believe deeply in certain things that you’ll mentally look for deeper possibilities. Shor’s suggestion seems superficial to me. 113. Factor quema grasa Says: Hi, @MIKE yes! I also like this point 🙂 “and can be simulated classically with exponential slowdown”
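To illustrate the parenthetical point in comment #88 — that a polynomial-time function with f(x) = f(x ⊕ a) is easy to write down, but the obvious constructions give the period away — here is a toy sketch (the function and the value of a below are mine, purely for illustration):

```python
# Toy "Simon-style" function: f(x) = f(x XOR a) for every x, so a is a hidden period,
# but the very definition of f exposes a to anyone who reads the code.
def simon_f(x: int, a: int) -> int:
    return min(x, x ^ a)

a = 0b1011  # made-up 4-bit "secret"
assert all(simon_f(x, a) == simon_f(x ^ a, a) for x in range(16))

# Recovering a classically is trivial here: any colliding pair (x, y) with
# simon_f(x, a) == simon_f(y, a) and x != y satisfies a == x ^ y.
x, y = 0b0001, 0b0001 ^ a
print(simon_f(x, a) == simon_f(y, a), x ^ y == a)  # True True
```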
https://statisticshelp.org/confidence_intervals_problems_1.php
Use the margin of error E = $139, confidence level of 99%, and σ = $513 to find the minimum sample size needed to estimate an unknown population mean μ. Solution: Again, we know that the required sample size is n = (z_{α/2} · σ / E)². This means that n = (2.576 × 513 / 139)² ≈ 90.4, which we round up to a minimum sample size of n = 91.
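A quick check of that arithmetic (a sketch; the variable names are mine, and the z-value comes from the standard normal quantile):

```python
from math import ceil
from statistics import NormalDist

E = 139.0      # margin of error, in dollars
sigma = 513.0  # assumed population standard deviation, in dollars
z = NormalDist().inv_cdf(1 - 0.01 / 2)  # z_{alpha/2} for 99% confidence, about 2.576

n = ceil((z * sigma / E) ** 2)  # round up, since the sample size must be a whole number
print(round(z, 3), n)           # 2.576 91
```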
http://www.physicsforums.com/showthread.php?t=138470
by superpig10000 Tags: oscillation, stupid A particle of mass m is moving under the combined action of the forces -kx, a damping force -2mb (dx/dt), and a driving force Ft. Express the solutions in terms of the initial position x(t=0) and the initial velocity of the particle. For the complementary solution, use x(t) = e^(-bt) A sin(w1 t + theta). And for the particular solution, use Ct + D, where w1^2 = w0^2 - b^2 and w0^2 = k/m. Here's what I have so far: m (d^2x/dt^2) + 2mb (dx/dt) + kx = Ft, so (d^2x/dt^2) + 2b (dx/dt) + w0^2 x = At (with A = F/m). The complementary solution is x = e^(-bt) (A1 e^(w1 t) + A2 e^(-w1 t)). I don't know how to convert this to the form above. And I am totally clueless as to how to find the particular solution. Please help!
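For the particular solution, the ansatz x_p = Ct + D can be checked mechanically; here is a sketch (not part of the original thread) using SymPy, writing w0 for ω0 and A for F/m as in the post:

```python
import sympy as sp

t = sp.symbols('t')
b, w0, A = sp.symbols('b w0 A', positive=True)   # A = F/m, as in the post
C, D = sp.symbols('C D')

# Substitute the trial solution x_p = C*t + D into  x'' + 2*b*x' + w0**2 * x = A*t
xp = C*t + D
residual = sp.diff(xp, t, 2) + 2*b*sp.diff(xp, t) + w0**2*xp - A*t

# The residual must vanish for all t, so each coefficient of t must be zero
coeff_eqs = sp.Poly(residual, t).coeffs()
print(sp.solve(coeff_eqs, [C, D]))   # {C: A/w0**2, D: -2*A*b/w0**4}
```

With A = F/m this gives C = F/(m·w0²) and D = −2bF/(m·w0⁴); the full solution is this particular part plus the damped-oscillation complementary part, with the two remaining constants fixed by x(0) and the initial velocity.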
https://blog.hotwhopper.com/2018/01/the-latest-conspiracy-theory-from-wuwt.html
## The latest conspiracy theory from WUWT science deniers - losing their grip on ice Sou | 2:52 AM The latest conspiracy theory from science deniers at WUWT is that the National Snow and Ice Data Center (NSIDC) is up to something nefarious. (Seeing nefarious intent in the most innocuous actions is one of the hallmarks of conspiratorial thinking.) All the fuss was about a new version of NSIDC's Sea Ice Index. It's gone from version 2 to version 3. In the latest version, monthly averages are calculated in a different way. The new version only affects monthly averages, not anything else. From the analysis report: The Sea Ice Index has been updated to Version 3 (V3). The key update in V3 is a change in the method for calculating the numerical monthly averages of sea ice extent and sea ice area data values; that is, the data distributed in .csv and .xlsx format. This change impacts only the monthly data values in the Sea Ice Index time series and not monthly sea ice extent and concentration maps that accompany the data product, that is, the .png, .tif, and shapefile archives. Daily data are also not impacted, nor are any current conclusions drawn from the Sea Ice Index data set about the state of sea ice in either the Arctic or the Antarctic. This change is being made in response to questions raised by users of the product concerning how the monthly average ice extent and areas are calculated. The difference between the versions in calculating monthly averages is explained in the report. The previous version worked out the 15% threshold (for ice extent) for each grid cell after getting a monthly average from the daily data. The new version works out the 15% threshold each day before averaging data. This means that now daily data is not included for any grid cell that doesn't have 15% ice, whereas previously it may have been included. Here is how NSIDC describes it: Monthly averages of numerical ice concentration data can be calculated through two different methods: 1. summing ice concentration data at each grid cell throughout a month, dividing by the number of days within a particular month to get average concentration for that grid cell, and then applying the 15 percent concentration threshold to the gridded field of average ice concentrations before deriving monthly area and extent, or 2. applying the 15 percent concentration threshold to the daily gridded field of concentration data before deriving that day’s area and extent; and then simply averaging those daily values over the course of the month. The former method is the basis for the numerical algorithm in V2, while the latter describes V3. If you're wondering how this affects the rate of change in sea ice extent in the Arctic, the report has a table for that. The only month affected to any great extent is October, when ice is rapidly forming: #### Multiple conspiracy ideation criteria Climate conspiracy blogger Anthony Watts came up with the headline: "Bad Science: NSIDC disappears Arctic sea ice extent going back years". He claimed: From the “Arctic is screaming louder thanks to Mark Serreze and his adjustment shenanigans” department, I don’t think this is going to fly. Some of the adjustments are as much as 1.2 million square kilometers of sea ice, which is as much as some yearly variations. -Anthony I've written previously about criteria used to identify and define conspiracy ideation. Obviously, given the "adjustment shenanigans" there are questionable motives and undoubtedly nefarious intent (criteria no.
1) because "nothing occurs by accident" (criteria no. 4). And since Anthony doesn't think "this is going to fly", he figures others will also think "something must be wrong" (criteria no. 5).  Well, it "flew" back in October last year and Version 3 of the index is still "flying", so what's the bet that Anthony's "don't think" loses out. I don't know what he's referring to with his 1.2 million sq km. Here is Table 5 from the analysis, showing the change in climatology (1981-2010) for each month, between v2 and v3 of the sea ice index. Maybe he was talking about the difference in the minimum for the month of October, in Table 14 of the report: Even if he got the 1.2 million sq km from that table, I'm still puzzled by his "which is as much as some yearly variations". There is a huge monthly difference between extent in September and that in November (which straddle October, if you're wondering why I chose those months). Since 1979, the difference ranges from 3.17 to more than 6 million sq km, using the previous version, which is a lot more than Anthony's 1.2 million sq km.  There's a reason October shows the biggest change - it has the biggest change. (See below.) Since Anthony hardly ever puts digits to keyboard these days, the rest of the article was from a denier-baiting "guest" called Tom Wiita - presumably meaning Tom works for Anthony for free (is there any other way?). Tom was geeing up the WUWT rabble, pointing to the November NSIDC report, which reported October data, for which the change was the largest in the Arctic. Down the bottom he added this conspiratorial tidbit: Antarctic sea ice extent is growing faster after this change. But of course, as usual, they put anti-narrative results someplace safe, like into Antarctic sea ice growth... He implies that NSIDC should have written about Antarctic sea ice in the section on Arctic sea ice and to not do so shows nefarious intent (criteria no. 1) and "something must be wrong" (criteria no. 5). What a plonker! #### The biggest difference is in October for Arctic sea ice The months in which there is the greatest difference with the changed approach are the ones where sea ice is changing most rapidly. From the report: There is a seasonality to V2 and V3 differences. In both the Arctic and Antarctic, the largest difference in monthly averages occur in the months when changes in the ice cover occur rapidly. In the Arctic, freeze up during the month of October is associated with the most dynamic changes in ice extent, and therefore the largest differences in V3 and V2 monthly averages. In the Antarctic, the quickly melting seasonal ice during the months of December and January is accompanied by the largest differences in V3 and V2 extent values. Overall, the southern hemisphere’s ice-ocean-atmosphere system produces a more dynamic ice edge such that V3 minus V2 differences (Figure 4) are larger than the Arctic (Figure 2) on average. #### Changes were in response to user feedback There's one more point worth making, and that's that NSIDC made the changes in response to user feedback. After a section about how the monthly data had previously been prepared, the authors of the report wrote this (page 3): The user community, which carefully fact-checks the monthly-average extent values presented in the popular Arctic Sea Ice News and Analysis blog against data distributed through the Sea Ice Index was not able to come up with the same values. 
The discrepancy stems from the different averaging methods, and user questions prompted internal discussions on how averages should be calculated. Discussions concluded with a decision to present monthly average data based on the averages of daily extent values because this method made most sense to users. You can see from the above para that WUWT deniers could in no way be part of the NSIDC user community. They don't "carefully fact-check" anything. They deal only in spreading disinformation. Facts are anathema to deniers. #### Bringing Africa to the Arctic - the Serengeti Strategy There's still one other point I'll make, because it's typical denier behaviour. The chorus was dogwhistled by both Anthony Watts and Tom Wiita to take a shot at Mark Serreze, who is the Director of NSIDC. There was only one person I saw who responded to this particular whistle (so far). Singling out one individual to bash is known as the Serengeti Strategy and is widely used at WUWT. Dr Serreze didn't write the report, however. The authors were Ann Windnagel, Michael Brandt, Florence Fetterer and Walt Meier. Incidentally, Walt Meier has in the distant past made every effort to help Anthony and his rabble learn something about sea ice. He hasn't written for WUWT in a long time - neither has any other reputable scientist, not since Anthony Watts and his lynch mob treated Richard Betts and Tamsin Edwards so horrendously. #### From the WUWT comments Most people didn't bother reading the NSIDC analysis; they just weighed in because they want to believe that climate science is a hoax. What else could it be? After all, it's been very cold in much of the USA recently, and that proves something or the other. (Deniers would never accept that warmer waters mean heavier snowfalls, and maybe aided the meteorological "bomb", or that the changes in the Arctic could be causing the polar jet stream to meander a lot further south these days.) (There are differing ideas among scientists about this - see this WaPo article by Chris Mooney. There's also a good article about recent US weather by Michael Mann.) Source: Climate Reanalyzer I bet many a rabid denier thinks of himself (they're predominately male) as Donald Trump just portrayed himself - as a "stable genius" (I don't think he means the horse kind), and "being, like, really smart" - oh my. The rabble at WUWT show as much decorum as the US President. Here's a sample. Hard to tell if NME666 thinks that lying scientists earn more or less than the average liar: January 5, 2018 at 3:02 pm the difference between an average liar, and a lying scientist, is pay scale:-)) Tom in Florida also suggests the change in methodology means scientists are lying. I don't know if he ever learnt arithmetic. January 5, 2018 at 2:25 pm As the saying goes: figures lie and liars figure. ristvan seems to think that it was very clever of Tom Wiita to catch NSIDC red-handed secretly and nefariously discussing the change in version on the very public NSIDC website, together with a very detailed analysis. I think that falls under criteria 6, Self-Sealing Reasoning, or maybe criteria 7, Unreflexive Counterfactual Thinking. The other thing it shows is that Rudd Istvan doesn't like change (except, I guess, when it's UAH versions). January 5, 2018 at 2:30 pm Caught red handed again. For another example, see my guest post here 2/17/17 concerning CONUS trends and NOAA's shift in early 2014 to NClimDiv. Typing NClimDiv into the search bar will take you there.
Jimmy Haigh knows for sure that 97% of the world is conspiring against him and his 3% in denial. It's humankind's biggest ever conspiracy in the whole wide world. He's as eloquent as his demi-god: January 5, 2018 at 3:51 pm Lying $cumb@g$. WR just knows that everything climate scientists do is nefarious. He wouldn't have it any other way. January 5, 2018 at 9:56 pm Regardless of the merits of the change in method, the fact is that if it didn’t result in lower levels of ice and an accelerated decline then the change wouldn’t have been made. They know it, and we know it, just like with the one-way temperature adjustments. These “scientists” can’t even pretend to be unbiased observers. #### References and further reading Windnagel, A., M. Brandt, F. Fetterer, and W. Meier. 2017. Sea Ice Index Version 3 Analysis. NSIDC Special Report 19. Boulder CO, USA: National Snow and Ice Data Center. http://nsidc.org/sites/nsidc.org/files/files/NSIDC-special-report-19.pdf. Curses! It's a conspiracy! The Fury is Back Thrice Over - HotWhopper article about deniers and their conspiracy theorising, with Stephan Lewandowsky et al (July 2015) #### 3 comments: 1. Today I read a claim that NSIDC is disappearing first-year ice to cook the numbers. I don’t even know where to start with that level of idiocy. 2. @ numerobis It was hard work getting all those propane space heaters up there but we are proud of the job we did.
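As a footnote to the two averaging methods quoted from the NSIDC report above, here is a toy sketch (all numbers invented) of how the order of thresholding and averaging can give different monthly extents for a single grid cell:

```python
import numpy as np

# Invented daily ice-concentration fractions for one grid cell over a ten-day "month"
conc = np.array([0.40, 0.30, 0.20, 0.05, 0.00, 0.00, 0.05, 0.10, 0.20, 0.30])
cell_area = 625.0  # km^2, a made-up grid-cell area

# V2-style: average the concentrations first, then apply the 15% threshold
v2_extent = cell_area if conc.mean() >= 0.15 else 0.0

# V3-style: apply the 15% threshold each day, then average the daily extents
v3_extent = float(np.mean([cell_area if c >= 0.15 else 0.0 for c in conc]))

print(v2_extent, v3_extent)  # 625.0 vs 312.5 — the two orderings disagree most at a fast-moving ice edge
```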
https://cvgmt.sns.it/paper/1268/
# Intrinsic regular hypersurfaces in Heisenberg groups and weak solutions of non linear first-order PDEs created by bigolin on 17 Feb 2009 Ph.D. Thesis Inserted: 17 Feb 2009 Year: 2009
http://wikien3.appspot.com/wiki/QT_interval
# QT interval QT interval Electrocardiogram showing QT interval calculated by tangent method ICD-10-PCS R94.31 ICD-9-CM 89.52 MeSH D004562 MedlinePlus 003868 The QT interval is a measurement made on an electrocardiogram used to assess some of the electrical properties of the heart. It is calculated as the time from the start of the Q wave to the end of the T wave, and approximates to the time taken from when the cardiac ventricles start to contract to when they finish relaxing. An abnormally long or abnormally short QT interval is associated with an increased risk of developing abnormal heart rhythms and sudden cardiac death. Abnormalities in the QT interval can be caused by genetic conditions such as Long QT syndrome, by certain medications such as sotalol or pitolisant, by disturbances in the concentrations of certain salts within the blood such as hypokalaemia, or by hormonal imbalances such as hypothyroidism. ## Measurement Illustrations of the tangent and threshold methods of measuring the QT interval The QT interval is most commonly measured in lead II for evaluation of serial ECGs, with leads I and V5 being comparable alternatives to lead II. Leads III, aVL and V1 are generally avoided for measurement of QT interval.[1] The accurate measurement of the QT interval is subjective[2] because the end of the T wave is not always clearly defined and usually merges gradually with the baseline. The QT interval in an ECG complex can be measured manually by different methods, such as the threshold method, in which the end of the T wave is determined by the point at which the component of the T wave merges with the isoelectric baseline, or the tangent method, in which the end of the T wave is determined by the intersection of a tangent line extrapolated from the T wave at the point of maximum downslope to the isoelectric baseline.[3] With the increased availability of digital ECGs with simultaneous 12-channel recording, QT measurement may also be done by the 'superimposed median beat' method. In the superimposed median beat method, a median ECG complex is constructed for each of the 12 leads. The 12 median beats are superimposed on each other and the QT interval is measured either from the earliest onset of the Q wave to the latest offset of the T wave or from the point of maximum convergence for the Q wave onset to the T wave offset.[4] ## Correction for heart rate Like the R–R interval, the QT interval is dependent on the heart rate in an obvious way (i.e., the faster the heart rate, the shorter the R–R interval and QT interval) and may be adjusted to improve the detection of patients at increased risk of ventricular arrhythmia. Modern computer-based ECG machines can easily calculate a corrected QT (QTc), but this correction may not aid in the detection of patients at increased risk of arrhythmia, as there are a number of different correction formulas. ### Bazett's formula The most commonly used QT correction formula is Bazett's formula,[5] named after physiologist Henry Cuthbert Bazett (1885-1950),[6] which calculates the heart rate–corrected QT interval (QTcB). Bazett's formula is based on observations from a study in 1920. Bazett's formula is often given in a form that returns QTc in dimensionally suspect units of square root of seconds. The mathematically correct form of Bazett's formula is: $QTc_B = \dfrac{QT}{\sqrt{RR/(1\,\mathrm{s})}}$ where QTcB is the QT interval corrected for heart rate, and RR is the interval from the onset of one QRS complex to the onset of the next QRS complex.
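A minimal numeric sketch of this form of the correction (not part of the article; the values below are purely illustrative), with QT in milliseconds and RR in seconds:

```python
from math import sqrt

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett correction with RR normalized by 1 s, so the result keeps QT's units (ms)."""
    return qt_ms / sqrt(rr_s / 1.0)

# Example: QT = 400 ms at a heart rate of 75 bpm, i.e. RR = 60/75 = 0.8 s
print(round(qtc_bazett(400.0, 60 / 75)))  # ~447 ms
```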
This mathematically correct formula returns the QTc in the same units as QT, generally milliseconds.[7] In some popular forms of this formula, it is assumed that QT is measured in milliseconds and that RR is measured in seconds, often derived from the heart rate (HR) as 60/HR. The result is then given in milliseconds per square root of seconds.[8] However, reporting QTc using this formula creates a "requirement regarding the units in which the original QT and RR are measured."[7] In either form, Bazett's non-linear QT correction formula is generally not considered accurate, as it over-corrects at high heart rates and under-corrects at low heart rates.[9] Bazett's correction formula is one of the most suitable QT correction formulae for neonates.[10] ### Fridericia's formula Fridericia[11] proposed an alternative correction formula using the cube root of RR: $QTc_F = \dfrac{QT}{\sqrt[3]{RR/(1\,\mathrm{s})}}$ ### Sagie's formula The Framingham correction, also called Sagie's formula, is based on the Framingham Heart Study, which used long-term cohort data from over 5,000 subjects, and is considered a better[12] method:[13] $QTlc = 1000\,\bigl(QT/1000 + 0.154\,(1 - RR)\bigr)$ Again, here QT and QTlc are in milliseconds and RR is measured in seconds. ### Comparison of corrections A recent retrospective study suggests that Fridericia's method and the Framingham method may produce results most useful for stratifying the 30-day and 1-year risks of mortality.[12] Upper limit of normal QT interval, corrected for heart rate according to Bazett's formula,[5] Fridericia's formula,[11] and subtracting 0.02 s from QT for every 10 bpm increase in heart rate.[14] Up to 0.42 s (≤420 ms) is chosen as normal QTc of QTB and QTF in this diagram.[15] Definitions of normal QTc vary from being equal to or less than 0.40 s (≤400 ms),[14] 0.41 s (≤410 ms),[16] 0.42 s (≤420 ms)[15] or 0.44 s (≤440 ms).[17] For risk of sudden cardiac death, "borderline QTc" in males is 431–450 ms; and, in females, 451–470 ms. An "abnormal" QTc in males is a QTc above 450 ms; and, in females, above 470 ms.[18] If there is not a very high or low heart rate, the upper limits of QT can roughly be estimated by taking QT=QTc at a heart rate of 60 beats per minute (bpm), and subtracting 0.02 s from QT for every 10 bpm increase in heart rate. For example, taking normal QTc ≤ 0.42 s, QT would be expected to be 0.42 s or less at a heart rate of 60 bpm. For a heart rate of 70 bpm, QT would roughly be expected to be equal to or below 0.40 s. Likewise, for 80 bpm, QT would roughly be expected to be equal to or below 0.38 s.[14] ## Abnormal intervals Prolonged QTc causes premature action potentials during the late phases of depolarization. This increases the risk of developing ventricular arrhythmias, including fatal ventricular fibrillation.[19] Higher rates of prolonged QTc are seen in females, older patients, high systolic blood pressure or heart rate, and short stature.[20] Prolonged QTc is also associated with the ECG finding called Torsades de Pointes, which can degenerate into ventricular fibrillation and is associated with higher mortality rates. There are many causes of prolonged QT intervals, acquired causes being more common than genetic.[21] ### Genetic causes Distribution of QT intervals amongst healthy males and females, and amongst those with congenital long QT syndrome An abnormally prolonged QT interval could be due to long QT syndrome, whereas an abnormally shortened QT interval could be due to short QT syndrome.
The QTc length is associated with variations in the NOS1AP gene.[22] The autosomal recessive syndrome of Jervell and Lange-Nielsen is characterized by a prolonged QTc interval in conjunction with sensorineural hearing loss. ### Due to adverse drug reactions Prolongation of the QT interval may be due to an adverse drug reaction.[23] Many drugs, such as haloperidol,[24] vemurafenib, ziprasidone, methadone[25] sertindole,[26] and pitolisant[27] can prolong the QT interval. Some antiarrhythmic drugs, like amiodarone or sotalol, work by getting a pharmacological QT prolongation. Also, some second-generation antihistamines, such as astemizole, have this effect. In addition, high blood alcohol concentrations prolong the QT interval.[28] A possible interaction between selective serotonin reuptake inhibitors and thiazide diuretics is associated with QT prolongation.[29] Macrolide and fluoroquinolone antibiotics are also suspected to prolong the QT interval. It was discovered recently that azithromycin is associated with an increase in cardiovascular death.[30] ### Due to pathological conditions Hypothyroidism, a condition of low function of the thyroid gland, can cause QT prolongation at the electrocardiogram. Acute hypocalcemia causes prolongation of the QT interval, which may lead to ventricular dysrhythmias. A shortened QT can be associated with hypercalcemia.[31] ### Use in drug approval studies Since 2005, the FDA and European regulators have required that nearly all new molecular entities be evaluated in a Thorough QT (TQT) study to determine a drug's effect on the QT interval.[32] The TQT study serves to assess the potential arrhythmia liability of a drug. Traditionally, the QT interval has been evaluated by having an individual human reader measure approximately nine cardiac beats per clinical timepoint. However, a number of recent drug approvals have used a highly automated approach, blending automated software algorithms with expert human readers reviewing a portion of the cardiac beats, to enable the assessment of significantly more beats per timepoint in order to improve precision and reduce cost.[33] As the pharmaceutical industry has gained experience in performing TQT studies, it has also become evident that traditional QT correction formulas such as QTcF, QTcB, and QTcLC may not always be suitable for evaluation of drugs impacting autonomic tone.[34] ### As a predictor of mortality Electrocardiography is a safe and noninvasive tool that can be used to identify those with a higher risk of mortality. In the general population, there has been no consistent evidence that prolonged QTc interval in isolation is associated with an increase in mortality from cardiovascular disease.[35] However, several studies[which?] have examined prolonged QT interval as a predictor of mortality for diseased subsets of the population. ### Rheumatoid arthritis Rheumatoid arthritis is the most common inflammatory arthritis.[36] Studies have linked rheumatoid arthritis with increased death from cardiovascular disease.[36] In a 2014 study,[19] Panoulas et al. found a 50 ms increase in QTc interval increased the odds of all-cause mortality by 2.17 in patients with rheumatoid arthritis. Patients with the highest QTc interval (> 424 ms) had higher mortality than those with a lower QTc interval. The association was lost when calculations were adjusted for C-reactive protein levels. 
The researchers proposed that inflammation prolonged the QTc interval and created arrhythmias that were associated with higher mortality rates. However, the mechanism by which C-reactive protein is associated with the QTc interval is still not understood. ### Type 1 diabetes Compared to the general population, type 1 diabetes may increase the risk of mortality, due largely to an increased risk of cardiovascular disease.[20][37] Almost half of patients with type 1 diabetes have a prolonged QTc interval (> 440 ms).[20] Diabetes with a prolonged QTc interval was associated with a 29% mortality over 10 years in comparison to 19% with a normal QTc interval.[20] Anti-hypertensive drugs increased the QTc interval, but were not an independent predictor of mortality.[20] ### Type 2 diabetes QT interval dispersion (QTd) is the maximum QT interval minus the minimum QT interval, and is linked with ventricular repolarization.[38] A QTd over 80 ms is considered abnormally prolonged.[39] Increased QTd is associated with mortality in type 2 diabetes.[39] QTd is a better predictor of cardiovascular death than QTc, which was unassociated with mortality in type 2 diabetes.[39] QTd higher than 80 ms had a relative risk of 1.26 of dying from cardiovascular disease compared to a normal QTd. ## References 1. ^ Panicker GK, Salvi V, Karnad DR, Chakraborty S, Manohar D, Lokhandwala Y, Kothari S (2014). "Drug-induced QT prolongation when QT interval is measured in each of the 12 ECG leads in men and women in a thorough QT study". J Electrocardiol. 47 (47(2)): 155–157. doi:10.1016/j.jelectrocard.2013.11.004. PMID 24388488. 2. ^ Panicker GK, Karnad DR, Joshi R, Shetty S, Vyas N, Kothari S, Narula D (2009). "Z-score for benchmarking reader competence in a central ECG laboratory". Ann Noninvasive Electrocardiol. 14 (14(1)): 19–25. doi:10.1111/j.1542-474X.2008.00269.x. PMID 19149789. 3. ^ Panicker GK, Karnad DR, Natekar M, Kothari S, Narula D, Lokhandwala Y (2009). "Intra- and interreader variability in QT interval measurement by tangent and threshold methods in a central electrocardiogram laboratory". J Electrocardiol. 42 (42(4)): 348–52. doi:10.1016/j.jelectrocard.2009.01.003. PMID 19261293. 4. ^ Salvi V, Karnad DR, Panicker GK, Natekar M, Hingorani P, Kerkar V, Ramasamy A, de Vries M, Zumbrunnen T, Kothari S, Narula D (2011). "Comparison of 5 methods of QT interval measurements on electrocardiograms from a thorough QT/QTc study: effect on assay sensitivity and categorical outliers". J Electrocardiol. 44 (44(2)): 96–104. doi:10.1016/j.jelectrocard.2010.11.010. PMID 21238976. 5. ^ a b Bazett HC. (1920). "An analysis of the time-relations of electrocardiograms". Heart (7): 353–370. 6. ^ Roguin, A (Mar 2011). "Henry Cuthbert Bazett (1885-1950)--the man behind the QT interval correction formula". Pacing Clin Electrophysiol. 34 (3): 384–8. doi:10.1111/j.1540-8159.2010.02973.x. PMID 21091739. 7. ^ a b Molnar, Janos; Weiss, Jerry; Rosenthal, James (1 March 1995). "The missing second: What is the correct unit for the Bazett corrected QT interval?". American Journal of Cardiology. 75 (7): 537–538. doi:10.1016/S0002-9149(99)80603-1. 8. ^ Salvi V, Karnad DR, Panicker GK, Kothari S (2010). "Update on the evaluation of a new drug for effects on cardiac repolarization in humans: issues in early drug development". Br J Pharmacol. 159 (1): 34–48. doi:10.1111/j.1476-5381.2009.00427.x. PMC 2823350. PMID 19775279.} 9. ^ Salvi V, Karnad DR, Panicker GK, Kothari S (2010). 
"Update on the evaluation of a new drug for effects on cardiac repolarization in humans: issues in early drug development". Br J Pharmacol. 159 (1): 34–48. doi:10.1111/j.1476-5381.2009.00427.x. PMC 2823350. PMID 19775279. 10. ^ Stramba-Badiale M, Karnad DR, Goulene KM, Panicker GK, Dagradi F, Spazzolini C, Kothari S, Lokhandwala YY, Schwartz PJ (2018). "For neonatal ECG screening there is no reason to relinquish old Bazett's correction". Eur Heart J. 39 (31): 2888–2895. doi:10.1093/eurheartj/ehy284. PMID 29860404. 11. ^ a b Fridericia LS (1920). "The duration of systole in the electrocardiogram of normal subjects and of patients with heart disease". Acta Medica Scandinavica (53): 469–486. 12. ^ a b Vandenberk B, Vandael E, Robyns T, Vandenberghe J, Garweg C, Foulon V, Ector J, Willems R (2016-06-17), "Which QT Correction Formulae to Use for QT Monitoring?", Journal of the American Heart Association, 5 (6), doi:10.1161/JAHA.116.003264, PMC 4937268, PMID 27317349 13. ^ Sagie A, Larson MG, Goldberg RJ, Bengston JR, Levy D (1992). "An improved method for adjusting the QT interval for heart rate (the Framingham Heart Study)". Am J Cardiol. 70 (7): 797–801. doi:10.1016/0002-9149(92)90562-D. PMID 1519533.[need full text] 14. ^ a b c Lesson III. Characteristics of the Normal ECG Frank G. Yanowitz, MD. Professor of Medicine. University of Utah School of Medicine. Retrieved on Mars 23, 2010 15. ^ a b Dr Dean Jenkins and Dr Stephen Gerred. "A normal adult 12-lead ECG". ecglibrary.com. Retrieved on Jan 28, 2018 16. ^ Loyola University Chicago Stritch School of Medicine > Medicine I By Matthew Fitz, M.D. Retrieved on Mars 23, 2010 17. ^ Image for Cardiovascular Physiology Concepts > Electrocardiogram (EKG, ECG) By Richard E Klabunde PhD 18. ^ 19. ^ a b Panoulas VF, Toms TE, Douglas KM, et al. (January 2014). "Prolonged QTc interval predicts all-cause mortality in patients with rheumatoid arthritis: an association driven by high inflammatory burden". Rheumatology. 53 (1): 131–7. doi:10.1093/rheumatology/ket338. PMID 24097136. 20. Rossing P, Breum L, Major-Pedersen A, et al. (March 2001). "Prolonged QTc interval predicts mortality in patients with Type 1 diabetes mellitus". Diabetic Medicine. 18 (3): 199–205. doi:10.1046/j.1464-5491.2001.00446.x. PMID 11318840. 21. ^ Van Noord, C; Eijgelsheim, M; Stricker, BHC (2010). "Drug- and non-drug-associated QT interval prolongation". British Journal of Clinical Pharmacology. 70 (1): 16–23. doi:10.1111/j.1365-2125.2010.03660.x. PMC 2909803. PMID 20642543. 22. ^ Arking DE, Pfeufer A, Post W, et al. (June 2006). "A common genetic variant in the NOS1 regulator NOS1AP modulates cardiac repolarization". Nature Genetics. 38 (6): 644–51. doi:10.1038/ng1790. PMID 16648850. 23. ^ Leitch A, McGinness P, Wallbridge D (September 2007). "Calculate the QT interval in patients taking drugs for dementia". BMJ (Clinical Research Ed.). 335 (7619): 557. doi:10.1136/bmj.39020.710602.47. PMC 1976518. PMID 17855324. 24. ^ "Information for Healthcare Professionals: Haloperidol (marketed as Haldol, Haldol Decanoate and Haldol Lactate)". Archived from the original on 2007-10-11. Retrieved 2007-09-18. 25. ^ Haigney, Mark. "Cardiotoxicity of methadone" (PDF). Director of Cardiology. Retrieved 21 February 2013. 26. ^ Lewis R, Bagnall AM, Leitner M (2005). "Sertindole for schizophrenia". The Cochrane Database of Systematic Reviews (3): CD001715. doi:10.1002/14651858.CD001715.pub2. PMID 16034864. 27. 
^ https://wakix.com "WAKIX prolongs the QT interval; avoid use of WAKIX in patients with known QT prolongation or in combination with other drugs known to prolong QT interval." 28. ^ Aasebø W, Erikssen J, Jonsbu J, Stavem K (April 2007). "ECG changes in patients with acute ethanol intoxication". Scandinavian Cardiovascular Journal. 41 (2): 79–84. doi:10.1080/14017430601091698. PMID 17454831. 29. ^ Tatonetti NP, Ye PP, Daneshjou R, Altman RB (March 2012). "Data-driven prediction of drug effects and interactions". Science Translational Medicine. 4 (125): 125ra31. doi:10.1126/scitranslmed.3003377. PMC 3382018. PMID 22422992. 30. ^ 31. ^ 32. ^ "Archived copy" (PDF). Archived from the original (PDF) on March 6, 2010. Retrieved December 9, 2009.CS1 maint: archived copy as title (link)[full citation needed] 33. ^ "iCardiac Applies Automated Approach to Thorough QT Study for a Leading Pharmaceutical Company - Applied Clinical Trials". 5 October 2011. Archived from the original on 5 October 2011. Retrieved 19 March 2018.CS1 maint: BOT: original-url status unknown (link) 34. ^ "Garnett" (PDF). Retrieved 6 June 2014. 35. ^ Montanez A, Ruskin JN, Hebert PR, Lamas GA, Hennekens CH (May 2004). "Prolonged QTc interval and risks of total and cardiovascular mortality and sudden death in the general population: a review and qualitative overview of the prospective cohort studies". Archives of Internal Medicine. 164 (9): 943–8. doi:10.1001/archinte.164.9.943. PMID 15136301. 36. ^ a b Solomon DH, Karlson EW, Rimm EB, et al. (March 2003). "Cardiovascular morbidity and mortality in women diagnosed with rheumatoid arthritis". Circulation. 107 (9): 1303–7. doi:10.1161/01.cir.0000054612.26458.b2. PMID 12628952. 37. ^ Borch-Johnsen K, Andersen PK, Deckert T (August 1985). "The effect of proteinuria on relative mortality in type 1 (insulin-dependent) diabetes mellitus". Diabetologia. 28 (8): 590–6. doi:10.1007/bf00281993. PMID 4054448. 38. ^ Okin PM, Devereux RB, Howard BV, Fabsitz RR, Lee ET, Welty TK (2000). "Assessment of QT interval and QT dispersion for prediction of all-cause and cardiovascular mortality in American Indians: The Strong Heart Study". Circulation. 101 (1): 61–66. doi:10.1161/01.cir.101.1.61. PMID 10618305. 39. ^ a b c Giunti S, Gruden G, Fornengo P, et al. (March 2012). "Increased QT interval dispersion predicts 15-year cardiovascular mortality in type 2 diabetic subjects: the population-based Casale Monferrato Study". Diabetes Care. 35 (3): 581–3. doi:10.2337/dc11-1397. PMC 3322722. PMID 22301117. 40. ^ Baumert, Mathias; Porta, Alberto; Vos, Marc A.; Malik, Marek; Couderc, Jean-Philippe; Laguna, Pablo; Piccirillo, Gianfranco; Smith, Godfrey L.; Tereshchenko, Larisa G. (2016-06-01). "QT interval variability in body surface ECG: measurement, physiological basis, and clinical value: position statement and consensus guidance endorsed by the European Heart Rhythm Association jointly with the ESC Working Group on Cardiac Cellular Electrophysiology". EP Europace. 18 (6): 925–944. doi:10.1093/europace/euv405. ISSN 1099-5129. PMC 4905605. PMID 26823389.
https://es.wikidoc.org/index.php/Electron_configuration
# Electron configuration Electron atomic and molecular orbitals In atomic physics and quantum chemistry, the electron configuration is the arrangement of electrons in an atom, molecule, or other physical structure (e.g., a crystal). Like other elementary particles, the electron is subject to the laws of quantum mechanics, and exhibits both particle-like and wave-like nature. Formally, the quantum state of a particular electron is defined by its wavefunction, a complex-valued function of space and time. According to the Copenhagen interpretation of quantum mechanics, the position of a particular electron is not well defined until an act of measurement causes it to be detected. The probability that the act of measurement will detect the electron at a particular point in space is proportional to the square of the absolute value of the wavefunction at that point. Electrons are able to move from one energy level to another by emission or absorption of a quantum of energy, in the form of a photon. Because of the Pauli exclusion principle, no more than two electrons may exist in a given atomic orbital; therefore an electron may only leap to another orbital if there is a vacancy there. Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. The concept is also useful for describing the chemical bonds that hold atoms together. In bulk materials this same idea helps explain the peculiar properties of lasers and semiconductors. ## Electron configuration in atoms The discussion below presumes knowledge of material contained at Atomic orbital. ### Summary of the quantum numbers The state of an electron in an atom is given by four quantum numbers. Three of these are integers and are properties of the atomic orbital in which it sits (a more thorough explanation is given in that article).

| number | denoted | allowed range | represents |
|---|---|---|---|
| principal quantum number | n | integer, 1 or more | Partly the overall energy of the orbital, and by extension its general distance from the nucleus. In short, the energy level it is in. (1+) |
| azimuthal quantum number | l | integer, 0 to n-1 | The orbital's angular momentum, also seen as the number of nodes in the density plot. Otherwise known as its orbital. (s=0, p=1...) |
| magnetic quantum number | m | integer, -l to +l, including zero | Determines the energy shift of an atomic orbital due to an external magnetic field (Zeeman effect). Indicates spatial orientation. |
| spin quantum number | ms | +½ or -½ (sometimes called "up" and "down") | Spin is an intrinsic property of the electron and independent of the other numbers. |

s and l in part determine the electron's magnetic dipole moment. No two electrons in one atom can have the same set of these four quantum numbers (Pauli exclusion principle). ### Shells and subshells Shells and subshells (also called energy levels and sublevels) are defined by the quantum numbers, not by the distance of the electrons from the nucleus, or even by their overall energy. In large atoms, shells above the second shell overlap (see Aufbau principle). States with the same value of n are related, and said to lie within the same electron shell. States with the same value of n and also l are said to lie within the same electron subshell, and those electrons having the same n and l are called equivalent electrons. If the states also share the same value of m, they are said to lie in the same atomic orbital.
Because electrons have only two possible spin states, an atomic orbital cannot contain more than two electrons (Pauli exclusion principle). A subshell can contain up to 4l+2 electrons; a shell can contain up to 2n² electrons, where n equals the shell number.

#### Worked example

Here is the electron configuration for a filled fifth shell:

| Shell | Subshell | Orbitals | Electrons |
|---|---|---|---|
| n = 5 | l = 0 | m = 0 → 1 type s orbital | max 2 electrons |
| | l = 1 | m = -1, 0, +1 → 3 type p orbitals | max 6 electrons |
| | l = 2 | m = -2, -1, 0, +1, +2 → 5 type d orbitals | max 10 electrons |
| | l = 3 | m = -3, -2, -1, 0, +1, +2, +3 → 7 type f orbitals | max 14 electrons |
| | l = 4 | m = -4, -3, -2, -1, 0, +1, +2, +3, +4 → 9 type g orbitals | max 18 electrons |
| | | Total | max 50 electrons |

This information can be written as 5s2 5p6 5d10 5f14 5g18 (see below for more details on notation).

### Notation

Physicists and chemists use a standard notation to describe atomic electron configurations. In this notation, a subshell is written in the form nxy, where n is the shell number, x is the subshell label and y is the number of electrons in the subshell. An atom's subshells are written in order of increasing energy – in other words, the sequence in which they are filled (see Aufbau principle below). For instance, ground-state hydrogen has one electron in the s orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s subshell and one in the (higher-energy) 2s subshell, so its ground-state configuration is written 1s2 2s1. Phosphorus (atomic number 15) is as follows: 1s2 2s2 2p6 3s2 3p3.

For atoms with many electrons, this notation can become lengthy. It is often abbreviated by noting that the first few subshells are identical to those of one or another noble gas. Phosphorus, for instance, differs from neon (1s2 2s2 2p6) only by the presence of a third shell. Thus, the electron configuration of neon is pulled out, and phosphorus is written as follows: [Ne]3s2 3p3. An even simpler version is simply to quote the number of electrons in each shell, e.g. (again for phosphorus): 2-8-5.

The orbital labels s, p, d, and f originate from a now-discredited system of categorizing spectral lines as sharp, principal, diffuse, and fundamental, based on their observed fine structure. When the first four types of orbitals were described, they were associated with these spectral line types, but there were no other names. The designation g was derived by following alphabetical order. Shells with more than five subshells are theoretically permissible, but this covers all discovered elements. For mnemonic reasons, some call the s and p orbitals spherical and peripheral.

### Aufbau principle

In the ground state of an atom (the condition in which it is ordinarily found), the electron configuration generally follows the Aufbau principle. According to this principle, electrons enter into states in order of the states' increasing energy; i.e., the first electron goes into the lowest-energy state, the second into the next lowest, and so on. The order in which the states are filled is as follows:

| n | s | p | d | f | g |
|---|---|---|---|---|---|
| 1 | 1 | | | | |
| 2 | 2 | 3 | | | |
| 3 | 4 | 5 | 7 | | |
| 4 | 6 | 8 | 10 | 13 | |
| 5 | 9 | 11 | 14 | 17 | 21 |
| 6 | 12 | 15 | 18 | 22 | |
| 7 | 16 | 19 | 23 | | |
| 8 | 20 | 24 | | | |

The order of increasing energy of the subshells can be constructed by going through downward-leftward diagonals of the table above (also see the diagram at the top of the page), going from the topmost diagonals to the bottom.
The first (topmost) diagonal goes through 1s; the second diagonal goes through 2s; the third goes through 2p and 3s; the fourth goes through 3p and 4s; the fifth goes through 3d, 4p, and 5s; and so on. In general, a subshell that is not "s" is always followed by a "lower" subshell of the next shell; e.g. 2p is followed by 3s; 3d is followed by 4p, which is followed by 5s; 4f is followed by 5d, which is followed by 6p, and then 7s. This explains the ordering of the blocks in the periodic table.

A pair of electrons with identical spins has slightly less energy than a pair of electrons with opposite spins. Since two electrons in the same orbital must have opposite spins, this causes electrons to prefer to occupy different orbitals. This preference manifests itself if a subshell with ${\displaystyle l>0}$ (one that contains more than one orbital) is less than full. For instance, if a p subshell contains four electrons, two electrons will be forced to occupy one orbital, but the other two electrons will occupy both of the other orbitals, and their spins will be equal. This phenomenon is called Hund's rule.

The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus (see the shell model of nuclear physics).

#### Orbitals table

This table shows all orbital configurations up to 7s, therefore it covers the simple electronic configuration for all elements from the periodic table up to Ununbium (element 112) with the exception of Lawrencium (element 103), which would require a 7p orbital.

*(Table of orbital diagrams for the s (l=0), p (l=1), d (l=2), and f (l=3) subshells of shells n = 1 through n = 7; the images are not reproduced here.)*

#### Exceptions in 3d, 4d, 5d

A d subshell that is half-filled or full (i.e. 5 or 10 electrons) is more stable than the s subshell of the next shell. This is the case because it takes less energy to maintain an electron in a half-filled d subshell than a filled s subshell. For instance, copper (atomic number 29) has a configuration of [Ar]4s1 3d10, not [Ar]4s2 3d9 as one would expect by the Aufbau principle. Likewise, chromium (atomic number 24) has a configuration of [Ar]4s1 3d5, not [Ar]4s2 3d4, where [Ar] represents the configuration for argon.

Exceptions in Period 4:[1]

| Element | Z | Electron configuration | Short electron conf. |
|---|---|---|---|
| Scandium | 21 | 1s2 2s2 2p6 3s2 3p6 4s2 3d1 | [Ar] 4s2 3d1 |
| Titanium | 22 | 1s2 2s2 2p6 3s2 3p6 4s2 3d2 | [Ar] 4s2 3d2 |
| Vanadium | 23 | 1s2 2s2 2p6 3s2 3p6 4s2 3d3 | [Ar] 4s2 3d3 |
| Chromium | 24 | 1s2 2s2 2p6 3s2 3p6 4s1 3d5 | [Ar] 4s1 3d5 |
| Manganese | 25 | 1s2 2s2 2p6 3s2 3p6 4s2 3d5 | [Ar] 4s2 3d5 |
| Iron | 26 | 1s2 2s2 2p6 3s2 3p6 4s2 3d6 | [Ar] 4s2 3d6 |
| Cobalt | 27 | 1s2 2s2 2p6 3s2 3p6 4s2 3d7 | [Ar] 4s2 3d7 |
| Nickel | 28 | 1s2 2s2 2p6 3s2 3p6 4s2 3d8 | [Ar] 4s2 3d8 |
| Copper | 29 | 1s2 2s2 2p6 3s2 3p6 4s1 3d10 | [Ar] 4s1 3d10 |
| Zinc | 30 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 | [Ar] 4s2 3d10 |
| Gallium | 31 | 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p1 | [Ar] 3d10 4s2 4p1 |

Exceptions in Period 5:[2]

| Element | Z | Electron configuration | Short electron conf. |
|---|---|---|---|
| Yttrium | 39 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d1 | [Kr] 5s2 4d1 |
| Zirconium | 40 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d2 | [Kr] 5s2 4d2 |
| Niobium | 41 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s1 4d4 | [Kr] 5s1 4d4 |
| Molybdenum | 42 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s1 4d5 | [Kr] 5s1 4d5 |
| Technetium | 43 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d5 | [Kr] 5s2 4d5 |
| Ruthenium | 44 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s1 4d7 | [Kr] 5s1 4d7 |
| Rhodium | 45 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s1 4d8 | [Kr] 5s1 4d8 |
| Palladium | 46 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 4d10 | [Kr] 4d10 |
| Silver | 47 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s1 4d10 | [Kr] 5s1 4d10 |
| Cadmium | 48 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 | [Kr] 5s2 4d10 |
| Indium | 49 | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p1 | [Kr] 5s2 4d10 5p1 |

Exceptions in Period 6:[3]

| Element | Z | Short electron conf. |
|---|---|---|
| Iridium | 77 | [Xe] 6s2 4f14 5d7 |
| Platinum | 78 | [Xe] 6s1 4f14 5d9 |
| Gold | 79 | [Xe] 6s1 4f14 5d10 |
| Mercury | 80 | [Xe] 6s2 4f14 5d10 |
| Thallium | 81 | [Xe] 6s2 4f14 5d10 6p1 |

### Relation to the structure of the periodic table

Electron configuration is intimately related to the structure of the periodic table. The chemical properties of an atom are largely determined by the arrangement of the electrons in its outermost "valence" shell (although other factors, such as atomic radius, atomic mass, and increased accessibility of additional electronic states also contribute to the chemistry of the elements as atomic size increases); therefore, elements in the same table group are chemically similar because they contain the same number of "valence" electrons.

## Electron configuration in molecules

In molecules, the situation becomes more complex, as each molecule has a different orbital structure. See the molecular orbital article and the linear combination of atomic orbitals method for an introduction and the computational chemistry article for more advanced discussions.

## Electron configuration in solids

In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend together into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory.
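As a programmatic summary of the Aufbau filling order and the configuration notation described above, here is a small C++ sketch of my own (not from the article). It simply fills subshells in the diagonal order (sort by n+l, ties broken by n); as the exception tables show, elements such as chromium and copper deviate from what it predicts.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Predict a ground-state configuration for atomic number Z using the
// Aufbau (diagonal) order: subshells sorted by n+l, ties broken by n.
void aufbauConfiguration(int Z) {
    struct Sub { int n, l; };
    std::vector<Sub> subs;
    for (int n = 1; n <= 8; ++n)
        for (int l = 0; l < n && l <= 4; ++l)   // s, p, d, f, g only, as in the table above
            subs.push_back({n, l});
    std::sort(subs.begin(), subs.end(), [](const Sub &a, const Sub &b) {
        return a.n + a.l != b.n + b.l ? a.n + a.l < b.n + b.l : a.n < b.n;
    });
    const char label[] = "spdfg";
    int remaining = Z;
    for (const Sub &s : subs) {
        if (remaining <= 0) break;
        int fill = std::min(remaining, 4 * s.l + 2);   // subshell capacity is 4l+2
        std::printf("%d%c%d ", s.n, label[s.l], fill);
        remaining -= fill;
    }
    std::printf("\n");
}

int main() {
    aufbauConfiguration(15);   // phosphorus: prints 1s2 2s2 2p6 3s2 3p3
    aufbauConfiguration(29);   // copper: prints ... 4s2 3d9, but the real atom is [Ar] 4s1 3d10
    return 0;
}
```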
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6215878129005432, "perplexity": 1681.414552392276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00621.warc.gz"}
http://www.acmerblog.com/hdu-4576-robot-7620.html
2015 09-16

# Robot

Michael has a telecontrol robot. One day he put the robot on a loop with n cells. The cells are numbered from 1 to n clockwise. At first the robot is in cell 1. Then Michael uses a remote control to send m commands to the robot. A command will make the robot walk some distance. Unfortunately the direction part of the remote control is broken, so for every command the robot will choose a direction (clockwise or anticlockwise) randomly with equal probability, and then walk w cells in that direction. Michael wants to know the probability of the robot stopping in a cell whose number is >= l and <= r after m commands.

There are multiple test cases. Each test case contains several lines. The first line contains four integers: the above mentioned n (1≤n≤200), m (0≤m≤1,000,000), l, r (1≤l≤r≤n). Then m lines follow, each representing a command. A command is an integer w (1≤w≤100) representing the number of cells the robot will walk for this command. The input ends with n=0, m=0, l=0, r=0. You should not process this test case.

Sample input:
```
3 1 1 2
1
5 2 4 4
1
2
0 0 0 0
```

Sample output:
```
0.5000
0.2500
```

The DP over commands is, for each cell i of the previous layer,

```
dp[(i + w) % n][j] += dp[i][j-1] * 0.5;
dp[i - w][j]       += dp[i][j-1] * 0.5;   // i - w is repeatedly increased by n until it is non-negative
```

Since m can be up to 1,000,000, only the previous layer is needed, so a rolling array over j is used:

```
dp[(i + w) % n][j&1] += dp[i][!(j&1)] * 0.5;
dp[x][j&1]           += dp[i][!(j&1)] * 0.5;   // x = i - w, likewise increased by n until non-negative
```

```cpp
#include <cstdio>
#include <iostream>
#include <algorithm>
using namespace std;

const int N = 220;
const double eps = 0.0;
double dp[N][2];   // rolling array: dp[cell][parity of command index]

int main(){
    int n, m, l, r;
    while(cin >> n >> m >> l >> r){
        if(n == 0 && m == 0 && l == 0 && r == 0) break;
        int w, i, j;
        for(i = 0; i < n; i++) dp[i][0] = dp[i][1] = eps;
        dp[0][0] = 1.0;                                        // robot starts in cell 1 (index 0)
        for(j = 1; j <= m; j++){
            scanf("%d", &w);
            for(i = 0; i < n; i++){
                dp[(i + w) % n][j&1] += dp[i][!(j&1)] * 0.5;   // clockwise
                int x = i - w;
                while(x < 0) x += n;                           // anticlockwise, wrapped to non-negative
                dp[x][j&1] += dp[i][!(j&1)] * 0.5;
            }
            for(i = 0; i < n; i++) dp[i][!(j&1)] = eps;        // clear the old layer for reuse
        }
        double res = 0.0;
        l--, r--;                                              // convert to 0-based cell indices
        for(i = l; i <= r; i++) res += dp[i][m&1];
        printf("%.4lf\n", res);
    }
    return 0;
}
```
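As a sanity check of the DP against the two samples, here is a small brute-force sketch of my own (not from the original post): for tiny m it enumerates every one of the 2^m direction choices directly and averages the hits.

```cpp
#include <cstdio>
#include <vector>

// Enumerate every direction choice for the m commands and average the
// indicator of "final cell number is in [l, r]" (cells are 1-based).
double bruteForce(int n, int l, int r, const std::vector<int> &w) {
    int m = (int)w.size();
    long long hit = 0, total = 1LL << m;
    for (long long mask = 0; mask < total; ++mask) {
        int pos = 0;                                              // 0-based index of cell 1
        for (int j = 0; j < m; ++j) {
            if (mask & (1LL << j)) pos = (pos + w[j]) % n;            // clockwise
            else                   pos = ((pos - w[j]) % n + n) % n;  // anticlockwise
        }
        if (pos + 1 >= l && pos + 1 <= r) ++hit;
    }
    return (double)hit / (double)total;
}

int main() {
    std::printf("%.4f\n", bruteForce(3, 1, 2, {1}));    // expect 0.5000
    std::printf("%.4f\n", bruteForce(5, 4, 4, {4, 1})); // expect 0.2500
    return 0;
}
```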
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3699356019496918, "perplexity": 6622.192122781381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189771.94/warc/CC-MAIN-20170322212949-00186-ip-10-233-31-227.ec2.internal.warc.gz"}
http://www.ck12.org/book/CK-12-Chemistry-Second-Edition/r8/section/14.2/
# 14.2: Stoichiometric Calculations

## Lesson Objectives

The student will:

• explain the importance of balancing equations before determining mole ratios.
• determine mole ratios in chemical equations.
• calculate the number of moles of any reactant or product from a balanced equation given the number of moles of one reactant or product.
• calculate the mass of any reactant or product from a balanced equation given the mass of one reactant or product.

## Vocabulary

• mole ratio

## Introduction

Earlier, we explored mole relationships in balanced chemical equations. In this lesson, we will use the mole as a conversion factor to calculate moles of product from a given number of moles of reactant, or we can calculate the number of moles of reactant from a given number of moles of product. This is called a “mole-mole” calculation. We will also perform “mass-mass” calculations, which allow you to determine the mass of reactant required to produce a given amount of product, or the mass of product you can obtain from a given mass of reactant.

## Mole Ratios

A mole ratio is the relationship between two components of a chemical reaction. For instance, one way we could read the following reaction is that \begin{align*}2\end{align*} moles of \begin{align*}\text{H}_{2(g)}\end{align*} react with \begin{align*}1\end{align*} mole of \begin{align*}\text{O}_{2(g)}\end{align*} to produce \begin{align*}2\end{align*} moles of \begin{align*}\text{H}_2\text{O}_{(l)}\end{align*}.

\begin{align*}2 \ \text{H}_{2(g)} + \text{O}_{2(g)} \rightarrow 2 \ \text{H}_2\text{O}_{(l)}\end{align*}

The mole ratio of \begin{align*}\text{H}_{2(g)}\end{align*} to \begin{align*}\text{O}_{2(g)}\end{align*} would be:

\begin{align*} \frac {2 \ \text{mol} \ \text{H}_2} {1 \ \text{mol} \ \text{O}_2} \ \ \ \ \ \text{or} \ \ \ \ \ \frac {1 \ \text{mol} \ \text{O}_2} {2 \ \text{mol} \ \text{H}_2} \end{align*}

What is the ratio of hydrogen molecules to water molecules? By examining the balanced chemical equation, we can see that the coefficient in front of the hydrogen is 2, while the coefficient in front of water is also 2. Therefore, the mole ratio can be written as:

\begin{align*} \frac {2 \ \text{mol} \ \text{H}_2} {2 \ \text{mol} \ \text{H}_2\text{O}} \ \ \ \ \ \text{or} \ \ \ \ \ \frac {2 \ \text{mol} \ \text{H}_2\text{O}} {2 \ \text{mol} \ \text{H}_2} \end{align*}

Similarly, the ratio of oxygen molecules to water molecules would be:

\begin{align*} \frac {2 \ \text{mol} \ \text{H}_2\text{O}} {1 \ \text{mol} \ \text{O}_2} \ \ \ \ \ \text{or} \ \ \ \ \ \frac {1 \ \text{mol} \ \text{O}_2} {2 \ \text{mol} \ \text{H}_2\text{O}} \end{align*}

In the following example, let’s try finding the mole ratios by first writing a balanced chemical equation from a “chemical sentence.”

Example: Four moles of solid aluminum are mixed with three moles of gaseous oxygen to produce two moles of solid aluminum oxide. What is the mole ratio of (1) aluminum to oxygen, (2) aluminum to aluminum oxide, and (3) oxygen to aluminum oxide?

Solution:

Balanced chemical equation: \begin{align*}4 \ \text{Al}_{(s)} + 3 \ \text{O}_{2(g)} \rightarrow 2 \ \text{Al}_2\text{O}_{3(s)}\end{align*}

1. mole ratio of aluminum to oxygen = \begin{align*} \frac {4 \ \text{mol Al}} {3 \ \text{mol} \ \text{O}_2} \ \text{or} \ \frac {3 \ \text{mol} \ \text{O}_2} {4 \ \text{mol Al}}\end{align*}
2.
mole ratio of aluminum to aluminum oxide = \begin{align*} \frac {4 \ \text{mol Al}} {2 \ \text{mol Al}_2\text{O}_3} \ \text{or} \ \frac {2 \ \text{mol Al}_2\text{O}_3} {4 \ \text{mol Al}}\end{align*} 3. mole ratio of oxygen to aluminum oxide = \begin{align*} \frac {3 \ \text{mol O}_2} {2 \ \text{mol Al}_2\text{O}_3} \ \text{or} \ \frac {2 \ \text{mol Al}_2\text{O}_3} {3 \ \text{mol O}_2}\end{align*} Example: Write the balanced chemical equation for the reaction of solid calcium carbide (\begin{align*}\text{CaC}_2\end{align*}) with water to form aqueous calcium hydroxide and acetylene (\begin{align*}\text{C}_2\text{H}_2\end{align*}) gas. When written, find the mole ratios for (1) calcium carbide to water and (2) calcium carbide to calcium hydroxide. Solution: Balanced chemical equation: \begin{align*}\text{CaC}_{2(s)} + 2 \ \text{H}_2\text{O}_{(l)} \rightarrow \text{Ca(OH)}_{2(aq)} + \ \text{C}_2\text{H}_{2(g)}\end{align*} 1. mole ratio of calcium carbide to water = \begin{align*} \frac {1 \ \text{mol CaC}_2} {2 \ \text{mol H}_2\text{O}} \ \text{or} \ \frac {2 \ \text{mol H}_2O} {1 \ \text{mol CaC}_2}\end{align*} 2. mole ratio of calcium carbide to calcium hydroxide = \begin{align*} \frac {1 \ \text{mol CaC}_2} {1 \ \text{mol Ca(OH)}_2}\end{align*} The correct mole ratios of the reactants and products in a chemical equation are determined by the balanced equation. Therefore, the chemical equation must always be balanced before the mole ratios are used for calculations. Looking at the unbalanced equation for the reaction of phosphorus trihydride with oxygen, it is difficult to guess the correct mole ratio of phosphorus trihydride to oxygen gas. \begin{align*}\text{PH}_{3(g)} + \ \text{O}_{2(g)} \rightarrow \text{P}_4\text{O}_{10(s)} + \ \text{H}_2\text{O}_{(g)}\end{align*} Once the equation is balanced, however, the mole ratio of phophorus trihydride to oxygen gas is apparent. Balanced chemical equation: \begin{align*}4 \ \text{PH}_{3(g)} + 8 \ \text{O}_{2(g)} \rightarrow \text{P}_4\text{O}_{10(s)} + 6 \ \text{H}_2\text{O}_{(g)}\end{align*} The mole ratio of phophorus trihydride to oxygen gas, then, is: \begin{align*} \frac {4 \ \text{mol PH}_3} {8 \ \text{mol O}_2}\end{align*} Keep in mind that before any mathematical calculations are made relating to a chemical equation, the equation must be balanced. ## Mole-Mole Calculations In the chemistry lab, we rarely work with exactly one mole of a chemical. In order to determine the amount of reagent (reacting chemical) necessary or the amount of product expected for a given reaction, we need to do calculations using mole ratios. Look at the following equation. If only \begin{align*}0.50 \ \text{moles}\end{align*} of magnesium hydroxide, \begin{align*}\text{Mg(OH)}_2\end{align*}, are present, how many moles of phosphoric acid, \begin{align*}\text{H}_3\text{PO}_4\end{align*}, would be required for the reaction? \begin{align*}2 \ \text{H}_3\text{PO}_4 + 3 \ \text{Mg(OH)}_2 \rightarrow \text{Mg}_3(\text{PO}_4)_2 + 6 \ \text{H}_2\text{O}\end{align*} Step 1: To determine the conversion factor, we want to convert from moles of \begin{align*}\text{Mg(OH)}_2\end{align*} to moles of \begin{align*}\text{H}_3\text{PO}_4\end{align*}. Therefore, the conversion factor is: mole ratio = \begin{align*} \frac {2 \ \text{mol H}_3\text{PO}_4} {3 \ \text{mol Mg(OH)}_2}\end{align*} Note that what we are trying to calculate is in the numerator, while what we know is in the denominator. Step 2: Use the conversion factor to answer the question. 
\begin{align*}(0.50 \ \text{mol Mg(OH)}_2) \cdot (\frac {2 \ \text{mol H}_3\text{PO}_4} {3 \ \text{mol Mg(OH)}_2}) = 0.33 \ \text{mol H}_3\text{PO}_4\end{align*} Therefore, if we have \begin{align*}0.50 \ \text{mol}\end{align*} of \begin{align*}\text{Mg(OH)}_2\end{align*}, we would need \begin{align*}0.33 \ \text{mol}\end{align*} of \begin{align*}\text{H}_3\text{PO}_4\end{align*} to react with all of the magnesium hydroxide. Notice if the equation was not balanced, the amount of \begin{align*}\text{H}_3\text{PO}_4\end{align*} required would have been calculated incorrectly. The ratio would have been 1:1, and we would have concluded that \begin{align*}0.5 \ \text{mol}\end{align*} of \begin{align*}\text{H}_3\text{PO}_4\end{align*} were required. Example: How many moles of sodium oxide \begin{align*}(\text{Na}_2\text{O})\end{align*} can be formed from \begin{align*}2.36 \ \text{mol}\end{align*} of sodium nitrate \begin{align*}(\text{NaNO}_3)\end{align*} using the balanced chemical equation below? \begin{align*}10 \ \text{Na} + 2 \ \text{NaNO}_3 \rightarrow 6 \ \text{Na}_2\text{O} + \ \text{N}_2\text{O}\end{align*} Solution: \begin{align*}(2.36 \ \text{mol NaNO}_3) \cdot ( \frac {6 \ \text{mol Na}_2\text{O}} {2 \ \text{mol NaNO}_3}) = 7.08 \ \text{mol Na}_2\text{O}\end{align*} Example: How many moles of sulfur are required to produce \begin{align*}5.42 \ \text{mol}\end{align*} of carbon disulfide, \begin{align*}\text{CS}_2\end{align*}, using the balanced chemical equation below? \begin{align*}\text{C} + 2 \ \text{S} \rightarrow \text{CS}_2\end{align*} Solution: \begin{align*}(5.42 \ \text{mol CS}_2) \cdot ( \frac {2 \ \text{mol S}} {1 \ \text{mol CS}_2}) = 10.84 \ \text{mol S}\end{align*} ## Mass-Mass Calculations A mass-mass calculation would allow you to solve one of the following types of problems: • Determine the mass of reactant necessary to product a given amount of product • Determine the mass of product that would be produced from a given amount of reactant • Determine the mass of reactant necessary to react completely with a second reactant As was the case for mole ratios, it is important to double check that you are using a balanced chemical equation before attempting any calculations. ### Using Proportion to Solve Stoichiometry Problems All methods for solving stoichiometry problems contain the same four steps. Step 1: Write and balance the chemical equation. Step 2: Convert the given quantity to moles. Step 3: Convert the moles of known to moles of unknown. Step 4: Convert the moles of unknown to the requested units. Step 1 has been covered in previous sections. We also just saw how to complete Step 3 in the previous section by using mole ratios. In order to complete the remaining two steps, we simply need to know how to convert between moles and the given or requested units. In this section, we will be solving “mass-mass problems,” which means that both the given value and the requested answer will both be in units of mass, usually grams. Note that if some other unit of mass is used, you should convert to grams first, and use that value for further calculations. The conversion factor between grams and moles is the molar mass (g/mol). To find the number of moles in \begin{align*}x\end{align*} grams of a substance, we divide by the molar mass, and to go back from moles to grams, we multiply by the molar mass. This process is best illustrated through examples, so let’s look at some sample problems. 
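Since the four steps are just a chain of multiplications and divisions, they can also be written down as code. The following C++ fragment is an illustrative sketch only (the function name is mine, not part of the lesson); it assumes you already have the balanced equation's coefficients and the molar masses, and it reproduces the pentane example worked out below.

```cpp
#include <cstdio>

// Mass-mass stoichiometry: grams of known -> moles of known ->
// moles of unknown (via the mole ratio) -> grams of unknown.
double massToMass(double gramsKnown, double molarMassKnown,
                  double coeffKnown, double coeffUnknown,
                  double molarMassUnknown) {
    double molesKnown   = gramsKnown / molarMassKnown;               // Step 2
    double molesUnknown = molesKnown * (coeffUnknown / coeffKnown);  // Step 3
    return molesUnknown * molarMassUnknown;                          // Step 4
}

int main() {
    // C5H12 + 8 O2 -> 5 CO2 + 6 H2O: grams of CO2 from 108.0 g of pentane?
    double gramsCO2 = massToMass(108.0, 72.0, 1.0, 5.0, 44.0);
    std::printf("%.0f g CO2\n", gramsCO2);   // about 330 g, as in the worked example
    return 0;
}
```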
The balanced equation below shows the reaction between hydrogen gas and oxygen gas to produce water. Since the equation is already balanced, Step 1 is already completed. Remember that the coefficients in the balanced equation are true for moles or molecules, but not for grams. \begin{align*}2 \ \text{H}_{2(g)} + \ \text{O}_{2(g)} \rightarrow 2 \ \text{H}_2\text{O}_{(l)}\end{align*} The molar ratio in this equation is two moles of hydrogen react with one mole of oxygen to produce two moles of water. If you were told that you were going to use 2.00 moles of hydrogen in this reaction, you would also know the moles of oxygen required for the reaction and the moles of water that would be produced. It is only slightly more difficult to determine the moles of oxygen required and the moles of water produced if you were told that you will be using 0.50 mole of hydrogen. Since you are using a quarter as much hydrogen, you would need a quarter as much oxygen and produce a quarter as much water. This is because the molar ratios keep the same proportion. If you were to write out a mathematical equation to describe this problem, you would set up the following proportion: \begin{align*} \frac {x \ \text{mol O}_2} {0.50 \ \text{mol H}_2} = \frac {1 \ \text{mol O}_2} {2 \ \text{mol H}_2}\end{align*} The given quantity, 0.50 mole of hydrogen, is already in moles, so Step 2 is also completed. We set up a proportion to help complete Step 3. From the balanced equation, we know that 1 mole of oxygen reacts with 2 moles of hydrogen. Similarly, we want to determine the \begin{align*}x\end{align*} moles of oxygen needed to react with 0.50 moles of hydrogen. The set up proportion would be similar to the one above. We can then solve the proportion by multiplying the denominator from the left side to both sides of the equal sign. In this case, you will find that \begin{align*}x = 0.25 \ \text{moles of O}_2\end{align*}. Example: Pentane, \begin{align*}\text{C}_5\text{H}_{12}\end{align*}, reacts with oxygen gas to produce carbon dioxide and water. How many grams of carbon dioxide will be produced by the reaction of 108.0 grams of pentane? Step 1: Write and balance the equation. \begin{align*}\text{C}_5\text{H}_{12} + 8 \ \text{O}_2 \rightarrow 5 \ \text{CO}_2 + 6 \ \text{H}_2\text{O}\end{align*} Step 2: Convert the given quantity to moles. \begin{align*}\frac {108.0 \ \text{g}} {72.0 \ \text{g/mol}} = 1.50 \ \text{mol C}_5\text{H}_{12}\end{align*} Step 3: Set up and solve the proportion to find moles of unknown. \begin{align*} \frac {x \ \text{mol CO}_2} {1.50 \ \text{mol C}_5\text{H}_{12}} = \frac {5 \ \text{mol CO}_2} {1 \ \text{mol C}_5\text{H}_{12}}\end{align*} Therefore, \begin{align*}x \ \text{mol CO}_2 = 7.50\end{align*}. Step 4: Convert the unknown moles to requested units (grams). \begin{align*}\text{grams CO}_2 = (7.50 \ \text{mol}) \cdot (44.0 \ \text{g/mol}) = 330. \ \text{grams}\end{align*} Example: Aluminum hydroxide reacts with sulfuric acid to produce aluminum sulfate and water. How many grams of aluminum hydroxide are necessary to produce 108 grams of water? Step 1: Write and balance the equation. \begin{align*}2 \ \text{Al(OH)}_3 + 3 \ \text{H}_2\text{SO}_4 \rightarrow \text{Al}_2(\text{SO}_4)_3 + 6 \ \text{H}_2\text{O}\end{align*} Step 2: Convert the given quantity to moles. \begin{align*}\frac {108.0 \ \text{g}} {18.0 \ \text{g/mol}} = 6.00 \ \text{mol H}_2\text{O}\end{align*} Step 3: Set up and solve the proportion to find moles of unknown. 
\begin{align*} \frac {x \ \text{mol Al(OH)}_3} {6.00 \ \text{mol H}_2\text{O}} = \frac {2 \ \text{mol Al(OH)}_3} {6.00 \ \text{mol H}_2\text{O}}\end{align*} Therefore, \begin{align*}x \ \text{mol Al(OH)}_3 = 2.00\end{align*}. Step 4: Convert the moles of unknown to grams. \begin{align*}\text{grams Al(OH)}_3 = (2.00 \ \text{mol}) \cdot (78.0 \ \text{g/mol}) = 156 \ \text{grams}\end{align*} Example: \begin{align*}15.0\end{align*} grams of chlorine gas is bubbled through liquid sulfur to produce liquid disulfur dichloride. How much product is produced in grams? Step 1: Write and balance the chemical equation. \begin{align*}\text{Cl}_{2(g)} + 2 \ \text{S}_{(l)} \rightarrow \text{S}_2\text{Cl}_{2(l)}\end{align*} Step 2: Convert the given quantity to moles. \begin{align*}\frac {15.0 \ \text{g}} {70.9 \ \text{g/mol}} = 0.212 \ \text{mol}\end{align*} Step 3: Set up and solve the proportion to find moles of unknown. \begin{align*}\frac {x \ \text{mol S}_2\text{Cl}_2} {0.212 \ \text{mol Cl}_2} = \frac {1 \ \text{mol S}_2\text{Cl}_2} {1 \ \text{mol Cl}_2}\end{align*} Therefore, \begin{align*}x \ \text{mol S}_2\text{Cl}_2 = 0.212\end{align*}. Step 4: Convert the moles of unknown to grams. \begin{align*}\text{grams S}_2\text{Cl}_2 = (0.212 \ \text{mol}) \cdot (135 \ \text{g/mol}) = 28.6 \ \text{grams}\end{align*} Example: A thermite reaction occurs between elemental aluminum and iron(III) oxide to produce aluminum oxide and elemental iron. The reaction releases enough heat to melt the iron that is produced. If 500. g of iron is produced in the reaction, how much iron(III) oxide was used as reactant? Step 1: Write and balance the chemical equation. \begin{align*}\text{Fe}_2\text{O}_{3(s)} + 2 \ \text{Al}_{(s)} \rightarrow 2 \ \text{Fe}_{(l)} + \ \text{Al}_2\text{O}_{3(s)}\end{align*} Step 2: Convert the given quantity to moles. \begin{align*}\frac {500. \ \text{g}} {55.9 \ \text{g/mol}} = 8.95 \ \text{mol}\end{align*} Step 3: Set up and solve the proportion to find moles of unknown. \begin{align*}\frac {x \ \text{mol Fe}_2\text{O}_3} {8.95 \ \text{mol Fe}} = \frac {1 \ \text{mol Fe}_2\text{O}_3}{2 \ \text{mol Fe}}\end{align*} Therefore, \begin{align*}x \ \text{mol Fe}_2\text{O}_3 = 4.48\end{align*}. Step 4: Convert the moles of unknown to grams. \begin{align*}\text{grams Fe}_2\text{O}_3 = (4.48 \ \text{mol}) \cdot (160. \ \text{g/mol}) = 717 \ \text{grams}\end{align*} Example: Ibuprofen is a common painkiller used by many people around the globe. It has the formula \begin{align*}\text{C}_{13}\text{H}_{18}\text{O}_2\end{align*}. If \begin{align*}200. \ \text{g}\end{align*} of ibuprofen is combusted, how much carbon dioxide is produced? Step 1: Write and balance the chemical equation. \begin{align*}2 \ \text{C}_{13}\text{H}_{18}\text{O}_{2(s)} + 33 \ \text{O}_{2(s)} \rightarrow 26 \ \text{CO}_{2(g)} + 18 \ \text{H}_2\text{O}_{(l)}\end{align*} Step 2: Convert the given quantity to moles. \begin{align*}\frac {200. \ \text{g}} {206 \ \text{g/mol}} = 0.967 \ \text{mol}\end{align*} Step 3: Set up and solve the proportion to find moles of unknown. \begin{align*} \frac {x \ \text{mol CO}_2} {0.967 \ \text{mol C}_{13}\text{H}_{18}\text{O}_2} = \frac {26 \ \text{mol CO}_2} {2 \ \text{mol C}_{13}\text{H}_{18}\text{O}_2}\end{align*} Therefore, \begin{align*}x \ \text{mol CO}_2 = 12.6\end{align*}. Step 4: Convert the moles of unknown to grams. 
\begin{align*}\text{grams CO}_2 = (12.6 \ \text{mol}) \cdot (44.0 \ \text{g/mol}) = 554 \ \text{grams}\end{align*} Example: If sulfuric acid is mixed with sodium cyanide, the deadly gas hydrogen cyanide is produced. How much sulfuric acid must be reacted to produce \begin{align*}12.5\end{align*} grams of hydrogen cyanide? Step 1: Write and balance the chemical equation. \begin{align*}2 \ \text{NaCN}_{(s)} + \ \text{H}_2\text{SO}_{4(aq)} \rightarrow \text{Na}_2\text{SO}_{4(aq)} + 2 \ \text{HCN}_{(g)}\end{align*} Step 2: Convert the given quantity to moles. \begin{align*}\frac {12.5 \ \text{g}} {27.0 \ \text{g/mol}} = 0.463 \ \text{mol}\end{align*} Step 3: Set up and solve the proportion to find moles of unknown. \begin{align*} \frac {x \ \text{mol H}_2\text{SO}_4} {0.463 \ \text{mol HCN}} = \frac {1 \ \text{mol H}_2\text{SO}_4} {2 \ \text{mol HCN}}\end{align*} Therefore, \begin{align*}x \ \text{mol H}_2\text{SO}_4 = 0.232\end{align*}. Step 4: Convert the moles of unknown to grams. \begin{align*}\text{grams H}_2\text{SO}_4 = (0.232 \ \text{mol}) \cdot (98.1 \ \text{g/mol}) = 22.7 \ \text{grams}\end{align*} A blackboard type discussion of stoichiometry (3e) is available at http://www.youtube.com/watch?v=EdZtSSJecJc (9:21). ### Using Dimensional Analysis to Solve Stoichiometry Problems Many chemists prefer to solve stoichiometry problems with a single line of math instead of writing out the multiple steps. This can be done by using dimensional analysis, also called the factor-label method. Recall that this is simply a method that uses conversion factors to convert from one unit to another. For a review, refer to the section on dimensional analysis in the chapter “Measurement in Chemistry.” In this method, we can follow the cancellation of units to obtain the correct answer. Let’s return to some of the problems from the previous section and use dimensional analysis to solve them. For instance: \begin{align*}15.0 \ \text{g}\end{align*} of chlorine gas is bubbled through liquid sulfur to produce disulfur dichloride. How much product is produced in grams? Step 1: As always, the first step is to correctly write and balance the equation: \begin{align*}\text{Cl}_{2(g)} + 2 \ \text{S}_{(l)} \rightarrow \text{S}_2\text{Cl}_{2(l)}\end{align*} Step 2: Identify what is being given (for this question, \begin{align*}15.0 \ \text{g of Cl}_2\end{align*} is the given) and what is asked for \begin{align*}(\text{grams of S}_2\text{Cl}_2)\end{align*}. Step 3: Next, use the correct factors that allow you to cancel the units you don’t want and get the unit you do want: Example: Consider the thermite reaction again. This reaction occurs between elemental aluminum and iron(III) oxide, releasing enough heat to melt the iron that is produced. If \begin{align*}500.0 \ \text{g}\end{align*} of iron is produced in the reaction, how much iron(III) oxide was placed in the original container? Step 1: Write and balance the equation: \begin{align*}\text{Fe}_2\text{O}_{3(s)} + 2 \ \text{Al}_{(s)} \rightarrow 2 \ \text{Fe}_{(l)} + \ \text{Al}_2\text{O}_{3(s)}\end{align*} Step 2: Determine what is given and what needs to be calculated: \begin{align*} \text{given} = 500. \ \text{g of Fe} \ \ \ \ \ \text{calculate = grams of Fe}_2\text{O}_3\end{align*} Step 3: Set-up the dimensional analysis system: \begin{align*}500. 
\text{g Fe} \cdot \frac {1 \ \text{mol Fe}} {55.85 \ \text{g Fe}} \cdot \frac {1 \ \text{mol Fe}_2\text{O}_3} {2 \ \text{mol Fe}} \cdot \frac {159.7 \ \text{g Fe}_2\text{O}_3} {1 \ \text{mol Fe}_2\text{O}_3} = 717 \ \text{g Fe}_2\text{O}_3\end{align*} Example: Ibuprofen is a common painkiller used by many people around the globe. It has the formula \begin{align*}\text{C}_{13}\text{H}_{18}\text{O}_2\end{align*}. If \begin{align*}200. \ \text{g}\end{align*} of Ibuprofen is combusted, how much carbon dioxide is produced? Step 1: Write and balance the equation: \begin{align*}2 \ \text{C}_{13}\text{H}_{18}\text{O}_{2(s)} + 33 \ \text{O}_{2(g)} \rightarrow 26 \ \text{CO}_{2(g)} + 9 \ \text{H}_2\text{O}_{(l)}\end{align*} Step 2: Determine what is given and what needs to be calculated: \begin{align*} \text{given} = 200. \text{g of ibuprofen} \ \ \ \ \ \text{calculate = grams of CO}_2\end{align*} Step 3: Set-up the dimensional analysis system: \begin{align*}200. \ \text{g C}_{13}\text{H}_{18}\text{O}_2 \cdot \frac {1 \ \text{mol C}_{13}\text{H}_{18}\text{O}_2} {206.3 \ \text{g C}_{13}\text{H}_{18}\text{O}_2} \cdot \frac {26 \ \text{mol CO}_2} {2 \ \text{mol C}_{13}\text{H}_{18}\text{O}_2} \cdot \frac {44.1 \ \text{g CO}_2} {1 \ \text{mol CO}_2} = 555 \ \text{g CO}_2\end{align*} Example: If sulfuric acid is mixed with sodium cyanide, the deadly gas hydrogen cyanide is produced. How much sulfuric acid must be placed in the container to produce \begin{align*}12.5 \ \text{g}\end{align*} of hydrogen cyanide? Step 1: Write and balance the equation: \begin{align*}2 \ \text{NaCN}_{(s)} + \ \text{H}_2\text{SO}_{4(aq)} \rightarrow \text{Na}_2\text{SO}_{4(s)} + 2 \ \text{HCN}_{(g)}\end{align*} Step 2: Determine what is given and what needs to be calculated: \begin{align*}\text{given} = 12.5 \ \text{g HCN} \ \ \ \ \ \text{calculate = grams of H}_2\text{SO}_4\end{align*} Step 3: Set-up the dimensional analysis system: \begin{align*}12.5 \ \text{g HCN} \cdot \frac {1 \ \text{mol HCN}} {27.0 \ \text{g HCN}} \cdot \frac {1 \ \text{mol H}_2\text{SO}_4} {2 \ \text{mol HCN}} \cdot \frac {98.06 \ \text{g H}_2\text{SO}_4} {1 \ \text{mol H}_2\text{SO}_4} = 22.7 \ \text{g H}_2\text{SO}_4\end{align*} ## Lesson Summary • The coefficients in a balanced chemical equation represent the relative amounts of each substance in the reaction. • When the moles of one substance in a reaction is known, the coefficients of the balanced equation can be used to determine the moles of all the other substances. • Mass-mass calculations can be done using dimensional analysis. ## Review Questions 1. How many moles of water vapor can be produced from \begin{align*}2\ \mathrm{moles}\end{align*} of ammonia for the following reaction between ammonia and oxygen: \begin{align*}4 \ \text{NH}_{3(g)} + 5 \ \text{O}_{2(g)} \rightarrow 4 \ \text{NO}_{(g)} + 6 \ \text{H}_2\text{O}_{(g)}\end{align*}? 1. \begin{align*}3 \ \mathrm{mol}\end{align*} 2. \begin{align*}6 \ \mathrm{mol}\end{align*} 3. \begin{align*}12 \ \mathrm{mol}\end{align*} 4. \begin{align*}24 \ \mathrm{mol}\end{align*} 2. How many moles of bismuth(III) oxide can be produced from \begin{align*}0.625 \ \mathrm{mol}\end{align*} of bismuth in the following reaction: \begin{align*}\text{Bi}_{(s)} + \ \text{O}_{2(g)} \rightarrow \text{Bi}_2\text{O}_{3(s)}\end{align*}? (Note: equation may not be balanced.) 1. \begin{align*} 0.313 \ \mathrm{mol}\end{align*} 2. \begin{align*}0.625 \ \mathrm{mol}\end{align*} 3. \begin{align*}1 \ \mathrm{mol}\end{align*} 4. 
\begin{align*}1.25 \ \mathrm{mol}\end{align*} 5. \begin{align*}2 \ \mathrm{mol}\end{align*} 3. For the following reaction, balance the equation and then determine the mole ratio of moles of \begin{align*}\text{B(OH)}_3\end{align*} to moles of water: \begin{align*}\text{B}_2\text{O}_{3(s)} + \ \text{H}_2\text{O}_{(l)} \rightarrow \text{B(OH)}_{3(s)}\end{align*}. 1. \begin{align*}1:1\end{align*} 2. \begin{align*}2:3\end{align*} 3. \begin{align*}3:2\end{align*} 4. None of the above. 4. Write the balanced chemical equation for the reactions below and find the indicated molar ratio. 1. Gaseous propane \begin{align*}(\text{C}_3\text{H}_8)\end{align*} combusts to form gaseous carbon dioxide and water. Find the molar ratio of \begin{align*}\text{O}_2\end{align*} to \begin{align*}\text{CO}_2\end{align*}. 2. Solid lithium reacts with an aqueous solution of aluminum chloride to produce aqueous lithium chloride and solid aluminum. Find the molar ratio of \begin{align*}\text{AlCl}_{3(aq)}\end{align*} to \begin{align*}\text{LiCl}_{(aq)}\end{align*}. 3. Gaseous ethane \begin{align*}(\text{C}_2\text{H}_6)\end{align*} combusts to form gaseous carbon dioxide and water. Find the molar ratio of \begin{align*}\text{CO}_{2(g)}\end{align*} to \begin{align*}\text{O}_{2(g)}\end{align*}. 4. An aqueous solution of ammonium hydroxide reacts with an aqueous solution of phosphoric acid to produce aqueous ammonium phosphate and water. Find the molar ratio of \begin{align*}\text{H}_3\text{PO}_{4(aq)}\end{align*} to \begin{align*}\text{H}_2\text{O}_{(l)}\end{align*}. 5. Solid rubidium reacts with solid phosphorous to produce solid rubidium phosphide. Find the molar ratio of Rb(s) to P(s). 5. For the given reaction (unbalanced): \begin{align*}\text{Ca}_3(\text{PO}_4)_2 + \ \text{SiO}_2 + \ \text{C} \rightarrow \text{CaSiO}_3 + \ \text{CO} + \ \text{P}\end{align*} 1. how many moles of silicon dioxide are required to react with \begin{align*}0.35 \ \mathrm{mol}\end{align*} of carbon? 2. how many moles of calcium phosphate are required to produce \begin{align*}0.45 \ \mathrm{mol}\end{align*} of calcium silicate? 6. For the given reaction (unbalanced): \begin{align*}\text{FeS} + \ \text{O}_2 \rightarrow \text{Fe}_2\text{O}_3 + \ \text{SO}_2\end{align*} 1. how many moles of iron(III) oxide are produced from \begin{align*}1.27 \ \mathrm{mol}\end{align*} of oxygen? 2. how many moles of iron(II) sulfide are required to produce \begin{align*}3.18 \ \mathrm{mol}\end{align*} of sulfur dioxide? 7. Write the following balanced chemical equation. Ammonia and oxygen are allowed to react in a closed container to form nitrogen and water. All species present in the reaction vessel are in the gas state. 1. How many moles of ammonia are required to react with \begin{align*}4.12 \ \mathrm{mol}\end{align*} of oxygen? 2. How many moles of nitrogen are produced when \begin{align*}0.98 \ \mathrm{mol}\end{align*} of oxygen are reacted with excess ammonia? 8. How many grams of nitric acid will react with \begin{align*}2.00 \ \mathrm{g}\end{align*} of copper(II) sulfide given the following reaction between copper(II) sulfide and nitric acid: \begin{align*}3 \ \text{CuS}_{(s)} + 8 \ \text{HNO}_{3(aq)} \rightarrow 3 \text{Cu(NO}_3)_{2(aq)} + 2 \ \text{NO}_{(g)} + 4 \ \text{H}_2\text{O}_{(l)} + 3 \ \text{S}_{(s)}\end{align*}? 1. \begin{align*}0.49 \ \mathrm{g}\end{align*} 2. \begin{align*}1.31 \ \mathrm{g}\end{align*} 3. \begin{align*}3.52 \ \mathrm{g}\end{align*} 4. \begin{align*}16.0 \ \mathrm{g}\end{align*} 9. 
When properly balanced, what mass of iodine is needed to produce \begin{align*}2.5 \ \mathrm{g}\end{align*} of sodium iodide in the following equation: \begin{align*}\text{I}_{2(aq)} + \ \text{Na}_2\text{S}_2\text{O}_{3(aq)} \rightarrow \text{Na}_2\text{S}_4\text{O}_{6(aq)} + \ \text{NaI}_{(aq)}\end{align*}? 1. \begin{align*}1.0 \ \mathrm{g}\end{align*} 2. \begin{align*}2.1 \ \mathrm{g}\end{align*} 3. \begin{align*}2.5 \ \mathrm{g}\end{align*} 4. \begin{align*}8.5 \ \mathrm{g}\end{align*} 10. Donna was studying the following reaction for a stoichiometry project: \begin{align*}\text{S}_{(s)} + 3 \ \text{F}_{2(g)} \rightarrow \text{SF}_{6(s)}\end{align*} She wondered how much she could obtain if she used \begin{align*}3.5 \ \mathrm{g}\end{align*} of fluorine. What mass of \begin{align*}\text{SF}_6(s)\end{align*} would she obtain from the calculation using this amount of fluorine? 1. \begin{align*}3.5\ \mathrm{g}\end{align*} 2. \begin{align*}4.5 \ \mathrm{g}\end{align*} 3. \begin{align*}10.5\ \mathrm{g}\end{align*} 4. \begin{align*}13.4 \ \mathrm{g}\end{align*} 11. Aqueous solutions of aluminum sulfate and sodium phosphate are placed in a reaction vessel and allowed to react. The products of the reaction are aqueous sodium sulfate and solid aluminum phosphate. 1. Write a balanced chemical equation to represent the above reaction. 2. How many grams of sodium phosphate must be added to completely react all of \begin{align*}5.00 \ \mathrm{g}\end{align*} of aluminum sulfate? 3. If \begin{align*}3.65 \ \mathrm{g}\end{align*} of sodium phosphate were placed in the container, how many grams of sodium sulfate would be produced? 12. For the given reaction (unbalanced): \begin{align*}\text{Ca(NO}_3)_2 + \ \text{Na}_3\text{PO}_4 \rightarrow \text{Ca}_3(\text{PO}_4)_2 + \ \text{NaNO}_3\end{align*} 1. how many grams of sodium nitrate are produced from \begin{align*}0.35 \ \mathrm{g}\end{align*} of sodium phosphate? 2. how many grams of calcium nitrate are required to produce \begin{align*}5.5\ \mathrm{g}\end{align*} of calcium phosphate? 13. For the given reaction (unbalanced): \begin{align*}\text{Na}_2\text{S} + \ \text{Al(NO}_3)_3 \rightarrow \text{NaNO}_3 + \ \text{Al}_2\text{S}_3\end{align*} 1. how many grams of aluminum sulfide are produced from \begin{align*}3.25 \ \mathrm{g}\end{align*} of aluminum nitrate? 2. how many grams of sodium sulfide are required to produce \begin{align*}18.25 \ \mathrm{g}\end{align*} of aluminum sulfide?
{"extraction_info": {"found_math": true, "script_math_tex": 168, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850074052810669, "perplexity": 5249.13913545948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00021-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/quick-questions-on-work-and-energy.159991/
# Homework Help: Quick questions on work and energy

1. Mar 9, 2007

### future_vet

Can work be done on a system if there is no motion? I would say no, no motion = no energy...

Is it possible for a system to have negative potential energy? I would say yes, since the choice of the zero of potential energy is arbitrary.

A 500-kg elevator is pulled upward with a constant force of 550 N for a distance of 50 m. What is the work done by the 550 N force? From what I understand, we multiply 550 N by 50 m, and get about 3.00 × 10^4 J.

2. Mar 9, 2007

### Dick

I think you are pretty much correct.

3. Mar 9, 2007

### Staff: Mentor

I would agree with you on all three. But the first question is thought provoking. I wonder if we can think of a case where there is work done, but no motion. Certainly that is true in cases where there is no *net* motion, like spinning a wheel with friction bearings. But no motion at all....hmmm. Chemical energy conversion...is that considered work? I don't think so, but maybe someone else can think of a creative case.

4. Mar 9, 2007

Thank you!

5. Mar 9, 2007

### AlephZero

You can't do mechanical work without motion, but there are other ways to increase the energy of a system - for example adding heat energy, or storing electrical charge in a capacitor. "Increasing the energy" is the same as "doing work".

Both correct.
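For reference, here is my own check of the arithmetic in the elevator question (not part of the original thread):

$$W = F\,d = (550\ \text{N})(50\ \text{m}) = 27\,500\ \text{J} \approx 2.8 \times 10^{4}\ \text{J},$$

which to one significant figure is the 3 × 10^4 J quoted above.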
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.919135570526123, "perplexity": 1039.6166680498181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218122.85/warc/CC-MAIN-20180821112537-20180821132537-00257.warc.gz"}
https://kristerw.blogspot.com/2015/04/
## Sunday, April 26, 2015

### Floating point, precision qualifiers, and optimization

ESSL permits optimizations that may change the value of floating point expressions (lowp and mediump precision change, reassociation of addition/multiplication, etc.), which means that identical expressions may give different results in different shaders. This may cause problems with e.g. alignment of geometry in multi-pass algorithms, so output variables may be decorated with the invariant qualifier to force the compiler to be consistent in how it generates code for them. The compiler is still allowed to do value-changing optimizations for invariant expressions, but it needs to do it in the same way for all shaders. This may give us interesting problems if optimizations and code generation are done without knowledge of each other...

### Example 1

As an example of the problems we may get with invariant, consider an application that is generating optimized SPIR-V using an offline ESSL compiler, and uses the IR with a Vulkan driver having a simple backend. The backend works on one basic block at a time, and is generating FMA (Fused Multiply-Add) instructions when multiplication is followed by addition. This is fine for invariant, even though FMA changes the precision, as the backend is consistent and always generates FMA when possible (i.e. identical expressions in different shaders will generate identical instructions).

```glsl
#version 310 es
in float a, b, c;
out invariant float result;

void main() {
    float tmp = a * b;
    if (c < 0.0) {
        result = tmp - 1.0;
    } else {
        result = tmp + 1.0;
    }
}
```

This is generated exactly as written if no optimization is done; first a multiplication, followed by a compare and branch, and we have two basic blocks doing one addition each. But the offline compiler optimizes this with if-conversion, so it generates SPIR-V as if main was written as

```glsl
void main() {
    float tmp = a * b;
    result = (c < 0.0) ? (tmp - 1.0) : (tmp + 1.0);
}
```

The optimization has eliminated the branches, and the backend will now see that it can use FMA instructions as everything is in the same basic block. But the application has one additional shader where main looks like

```glsl
void main() {
    float tmp = a * b;
    if (c < 0.0) {
        foo();
        result = tmp - 1.0;
    } else {
        result = tmp + 1.0;
    }
}
```

The optimization cannot transform the if-statement here, as the basic blocks are too complex. So this will not use FMA, and will therefore break the invariance guarantee.

### Example 2

It is not only invariant expressions that are problematic — you may get surprising results from normal code too when optimizations done offline and in the backend interact in interesting ways. For example, you can get different precision in different threads from "redundant computation elimination" optimizations. This happens for cases such as

```glsl
mediump float tmp = a + b;
if (x == 0) {
    /* Code not using tmp */
    ...
} else if (x == 1) {
    /* Code using tmp */
    ...
} else {
    /* Code using tmp */
    ...
}
```

where tmp is calculated, but not used, for the case "x == 0". The optimization moves the tmp calculation into the two basic blocks where it is used

```glsl
if (x == 0) {
    /* Code not using tmp */
    ...
} else if (x == 1) {
    mediump float tmp = a + b;
    /* Code using tmp */
    ...
} else {
    mediump float tmp = a + b;
    /* Code using tmp */
    ...
}
```

and the backend may now choose to use different precisions for the two mediump tmp calculations.
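Example 1 relies on FMA being value-changing: the fused form skips the rounding of the intermediate a * b. Here is a small host-side C++ sketch of my own (not from the post) showing the difference with std::fma; it assumes plain float evaluation with no excess precision and no automatic contraction (e.g. build with -ffp-contract=off).

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // a*a is exactly 1 + 2^-11 + 2^-24, which does not fit in a float
    // significand, so a separate multiply rounds it to 1 + 2^-11 before
    // the subtraction. The fused version feeds the exact product to the add.
    volatile float a = 1.0f + 0x1p-12f;   // volatile: keep the compiler from constant-folding

    float separate = a * a - 1.0f;           // 2^-11
    float fused    = std::fma(a, a, -1.0f);  // 2^-11 + 2^-24

    std::printf("separate   = %.10e\n", separate);
    std::printf("fused      = %.10e\n", fused);
    std::printf("difference = %.10e\n", fused - separate);   // 2^-24, not zero
    return 0;
}
```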
### Offline optimization with SPIR-V The examples above are of course silly — higher level optimizations should not be allowed to change control flow for invariant statements, and the "redundant computation elimination" does not make sense for warp-based architectures. But the first optimization would have been fine if used with a better backend that could combine instructions from different basic blocks. And not all GPUs are warp-based. That is, it is reasonable to do this kind of optimizations, but they need to be done in the driver where you have full knowledge about the backend and architecture. My impression is that many developers believe that SPIR-V and Vulkan implies that the driver will just do simple code generation, and that all optimizations are done offline. But that will prevent some optimizations. It may work for a game engine generating IR for a known GPU, but I'm not sure that the GPU vendors will provide enough information on their architectures/backends that this will be viable either. So my guess is that the drivers will continue to do all the current optimizations on SPIR-V too, and that offline optimizations will not matter... ## Thursday, April 9, 2015 ### Precision qualifiers in SPIR-V SPIR-V is a bit inconsistent in how it handles types for graphical shaders and compute kernels. Kernels are using sized types, and there are explicit conversions when converting between sizes. Shaders are using 32-bit types for everything, but there are precision decorations that indicates which size is really used, and conversions between sizes are done implicitly. I guess much of this is due to historical reasons in how ESSL defines its types, but I think it would be good to be more consistent in the IR. ESSL 1 played fast and loose with types. For example, it has an integer type int, but the platform is allowed to implement it as floating point, so it is not necessarily true that "a+1 != a" for a sufficiently large a. ESSL 3 strengthened the type system, so for example high precision integers are now represented as 32-bit values in two's complement form. The rest of this post will use the ESSL 3 semantics. ESSL does not care much about the size of variables; it has only one integer type "int" and one floating point type "float". But you need to specify which precision to use in calculations by adding precision qualifiers when you declare your variables, such as highp float x; Using highp means that the calculations must be done in 32-bit precision, mediump means at least 16-bit precision, and lowp means using at lest 9 bits (yes, "nine". You cannot fit a lowp value in a byte). The compiler may use any size for the variables, as long as the precision is preserved. So "mediump int" is similar to the int_least16_t type in C, but ESSL permits the compiler to use different precision for different instructions. It can for example use 16-bit precision for one mediump addition, and 32-bit for another, so it is not necessarily true that "a+b == a+b" for mediump integers a and b if the addition overflow 16 bits. The reason for having this semantics is to be able to use the hardware efficiently. Consider for example a processor having two parallel arithmetic units — one 16-bit and one 32-bit. If we have a shader where all instructions are mediump, then we could only reach 50% utilization by executing all instructions as 16-bit. But the backend can now promote half of them to 32-bit and thus be able to double the performance by using both arithmetic units. 
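To make the mediump freedom above concrete, here is a small C++ sketch of my own (not from the post) that emulates the same addition executed once on a 16-bit unit and once promoted to 32 bits, which is exactly why "a+b == a+b" is not guaranteed for mediump integers.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Two mediump ints: the compiler only promises at least 16-bit precision.
    int32_t a = 20000, b = 20000;

    // One instruction executed at 16-bit precision: the sum overflows and is
    // truncated to 16 bits (two's complement), as the ESSL rules allow.
    int32_t sum16 = (int16_t)(a + b);     // -25536

    // The same expression promoted to a 32-bit unit.
    int32_t sum32 = a + b;                // 40000

    std::printf("16-bit result: %d\n", sum16);
    std::printf("32-bit result: %d\n", sum32);
    std::printf("equal: %s\n", sum16 == sum32 ? "yes" : "no");   // no
    return 0;
}
```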
SPIR-V is representing this by always using a 32-bit type and decorating the variables and instructions with PrecisionLow, PrecisionMedium, or PrecisionHigh. The IR does not have any type conversions for the precision as the actual type is the same, and it is only the precision of the instruction that differ. But ESSL has requirements on conversions when changing precision in operations that is similar to how size change is handled in other languages: When converting from a higher precision to a lower precision, if the value is representable by the implementation of the target precision, the conversion must also be exact. If the value is not representable, the behavior is dependent on the type: • For signed and unsigned integers, the value is truncated; bits in positions not present in the target precision are set to zero. (Positions start at zero and the least significant bit is considered to be position zero for this purpose.) • For floating point values, the value should either clamp to +INF or -INF, or to the maximum or minimum value that the implementation supports. While this behavior is implementation dependent, it should be consistent for a given implementation It is of course fine to have the conversions implicit in the IR, but the conversions are explicit for the similar conversion fp32 to fp16 in kernels, so it is inconsistent. I would in general want the shader and kernel IR to be as similar as possible in order to avoid confusion when writing SPIR-V tools working on both types of IR, and I think it is possible to improve this with minor changes: • The highp precision qualifier means that the compiler must use 32-bit precision, i.e. a highp-qualified type is the same as as the normal non-qualified 32-bit type. So the PrecisionHigh does not tell the compiler anything; it just adds noise to the IR, and can be removed from SPIR-V. • Are GPUs really taking advantage of lowp for calculations? I can understand how lowp may be helpful for e.g. saving power in varying interpolation, and those cases are handled by having the PrecisionLow decoration on variables. But it seems unlikely to me that any GPU have added the extra hardware to do arithmetic in lowp precision, and I would assume all GPUs use 16-bit or higher for lowp arithmetic. If so, then PrecisionLow should not be a valid decoration for instructions. • The precision decorations are placed on instructions, but it seems better to me to have the them on the type instead. If PrecisionLow and PrecisionHigh are removed, then PrecisionMedium is the only decoration left. But this can be treated as a normal 16-bit type from the optimizers point of view, so we could instead permit both 32- and 16-bit types for graphical shaders, and specify in the execution model that it is allowed to promote 16-bit to 32-bit. Optimizations and type conversions can then be done in exactly the same way as for kernels, and the backend can promote the types as appropriate for the hardware. ## Tuesday, April 7, 2015 ### Comments on the SPIR-V provisional specification Below are some random comments/thoughts/questions from my initial reading of the SPIR-V provisional specification (revision 30). Many of my comments are that the specification is unclear. I may agree that it is obvious what the specification mean, but my experience from specification work is that it is often the case that everybody agree that it is obvious, but they do not agree on what the obvious thing is. So I think the specification need to be more detailed. 
This is especially important as one of the goals of SPIR-V is to "be targeted by new front ends for novel high-level languages", and those may generate constructs that are not possible in GLSL or OpenCL C, so all constraints need to be documented. Some other comments are related to tradeoffs. I think the specification is OK, so my comments are mostly highlighting some limitations (and I may have chosen a different tradeoff for some of them...). It would be great to have the rationale described for these kinds of decisions.

### Const and Pure functions

Functions can be marked as Const or Pure. Const is described as

"Compiler can assume this function has no side effects, and will not access global memory or dereference function parameters. Always computes the same result for the same argument values."

while Pure is described as

"Compiler can assume this function has no side effect, but might read global memory or read through dereferenced function parameters. Always computes the same result for the same argument values."

I assume the intention is that the compiler is allowed to optimize calls to Const functions, such as moving function calls out of loops, CSE:ing function calls, etc. And similarly for Pure functions, as long as there are no writes to global memory that may affect the result. But the specification talks about "global memory" without defining what it is. For example, are UniformConstant global variables included in this? Those cannot change, so we can do all the Const optimizations even if the function is reading from them. And what about WorkgroupLocal? That name does not sound like global memory, but it does of course still prevent optimizations. I would suggest that the specification be changed to explicitly list the storage classes permitted in Const and Pure functions...

### Storage Classes

I'm a bit confused by the Uniform and Function storage classes... The Uniform storage class is a required capability for Shader. But the GLSL uniform is handled by the UniformConstant storage class, so what is the usage/semantics of Uniform? Function is described as "A variable local to a function" and is also a required capability for Shader. But OpenCL also has function-local variables... How are those handled? Why are they not handled in the same way for Shader and Kernel?

### Restrict

The Restrict decoration is described as

"Apply to a variable, to indicate the compiler may compile as if there is no aliasing."

This does not give you the full picture, as you can express that pointers do not alias as described in the Memory Model section. But pointers have different semantics compared to variables, and that introduces some complications. OpenCL C defines restrict to work in the same way as for C99, and that is different from the SPIR-V specification. What C99 says is, much simplified, that a value pointed to by a restrict-qualified pointer cannot be modified through a pointer not based on that restrict-qualified pointer. So two pointers can alias if they have the correct "based-on" relationship and follow some rules on how they are accessed. The frontend may of course decide not to decorate the pointers when it cannot express the semantics in the IR, but it is unclear to me that it is easy to detect the problematic cases. I think this needs to be clarified along the lines of what the LLVM Language Reference Manual does for noalias.

### Volatile

There is a Memory Access value Volatile that is described as

"This access cannot be optimized away; it has to be executed."
This does not really make sense... The memory model is still mostly TBD in the document, but the principle in GPU programming is that you need atomics or barriers in order to make memory accesses consistent. So there is no way you can observe the difference between the compiler respecting Volatile or not.

My understanding is that the rationale for Volatile in SPIR-V is to be able to work around compiler bugs by decorating memory operations with Volatile and in that way disable some compiler transformations. If so, then I think it would be useful to document this in order to make it more likely that compilers do the right thing. After all, I would expect the project manager to tell the team to do more useful work than fixing a bug for which you cannot see the difference between correct and incorrect behavior.

It has historically been rather common that C compilers miscompile volatile. A typical example is an optimization such as store forwarding, which substitutes a loaded value with a previously stored value, where the developer forgets to check for volatility when writing the optimization. So a sequence such as

```
 7:          TypeInt 32 1
15:  7(int)  Constant 0
             Store 14(tmp) 15
18:  7(int)  IMul 16 17
             Store 10(a) 18
```

corresponding to

```
volatile int tmp = 0;
a = b * tmp;
```

gets the ID 17 substituted by the constant 0, and is then optimized to

```
 7:          TypeInt 32 1
15:  7(int)  Constant 0
             Store 14(tmp) 15
             Store 10(a) 15
```

which is not what is expected. But you can argue that this actually follows the SPIR-V specification — we have not optimized away the memory accesses!

### Volatile and OpenCL

The OpenCL C specification says that

"The type qualifiers const, restrict and volatile as defined by the C99 specification are supported."

which I interpret as: volatile works in exactly the same way as for C99. And C99 says

"An object that has volatile-qualified type may be modified in ways unknown to the implementation or have other unknown side effects. Therefore any expression referring to such an object shall be evaluated strictly according to the rules of the abstract machine, as described in 5.1.2.3. Furthermore, at every sequence point the value last stored in the object shall agree with that prescribed by the abstract machine, except as modified by the unknown factors mentioned previously. What constitutes an access to an object that has volatile-qualified type is implementation-defined."

That is, the compiler is not allowed to reorder volatile memory accesses, even if it knows that they do not alias. So the definition of the SPIR-V Volatile needs to be strengthened if it is meant to be used for implementing the OpenCL volatile. Although I guess you may get around this by a suitable implementation-defined definition of what constitutes an access to an object...

### Differences between graphical shaders and OpenCL

The Validation Rules say that for graphical shaders

• Scalar integer types can be parameterized only as:
  – 32-bit signed
  – 32-bit unsigned

while OpenCL cannot use signed/unsigned

• OpTypeInt validation rules
  – The bit width operand can only be parameterized as 8, 16, 32 and 64 bit.
  – The sign operand must always be 0

I guess this lack of signed/unsigned information is the reason why there are Function Parameter Attributes called Zext and Sext, described as "Value should be zero/sign extended if needed." Both choices regarding the signed/unsigned information are fine for an IR, but why is SPIR-V treating graphics and OpenCL differently?
### Endianness

Khronos thinks of SPIR-V as an in-memory format, not a file format, which means that the words are stored in the host's native byte order. But one of the goals of SPIR-V is "enabling shared tools to generate or operate on it", so it will be passed in files between tools. The specification has a helpful hint that you can use the magic number to detect endianness, but that means that all tools need to do the (admittedly simple) extra work to handle both big and little endian. I think that the specification should define one file format encoding (preferably with a standardized file name extension), and say that all tools should use this encoding. By the way, are there really any big endian platforms in the target market?
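As a rough sketch of the "admittedly simple" extra work mentioned above (my own illustration, not from the specification; it assumes the standard SPIR-V magic number 0x07230203 stored as the first word of a module), a tool could detect a byte-swapped module like this:

```c
#include <stdbool.h>
#include <stdint.h>

#define SPIRV_MAGIC 0x07230203u

/* Byte-swap a 32-bit word. */
static uint32_t swap32(uint32_t w)
{
    return (w >> 24) | ((w >> 8) & 0x0000ff00u) |
           ((w << 8) & 0x00ff0000u) | (w << 24);
}

/* Returns true if the module was written in the opposite byte order from
   the host, in which case every word must be swapped before use; sets
   *valid to false if the first word is not a SPIR-V magic number at all. */
static bool spirv_needs_swap(const uint32_t *words, bool *valid)
{
    *valid = true;
    if (words[0] == SPIRV_MAGIC)
        return false;
    if (swap32(words[0]) == SPIRV_MAGIC)
        return true;
    *valid = false;
    return false;
}
```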
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40356892347335815, "perplexity": 1606.3765989776618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867579.80/warc/CC-MAIN-20180625072642-20180625092642-00575.warc.gz"}
https://scoop.eduncle.com/is-evaluation-of-double-integrals-in-polar-coordinate-required-for-jam-physics
IIT JAM — August 7, 2020, 6:51 pm

Question: Is evaluation of double integrals in polar coordinates required for JAM Physics?

Answers:
• Yes, it's important. (Deepak singh: polar coordinates are easy to understand.)
• Chandra dhawan: Yes, it's important — see this attachment link and file: https://youtu.be/kNANYzuOUpA. See this integral; in this integral we use polar coordinates, and memorize it too, because it's important.
• Deepak singh: Go through this link, it's very easy: https://youtu.be/kNANYzuOUpA
• Vaishnavi rajora: Yes, it's most important. Don't skip it.
• Mahak: Yes, it's a very important topic for JAM Physics. You can study it from H.K. Dass or the first chapter of Griffiths; this topic is key to every subject of physics.
• Somnath: Yes, it is important.
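The specific integral in the attachment is not reproduced here, but a standard worked example of the technique being discussed (my illustration, not the attachment) is the Gaussian integral, evaluated by switching to polar coordinates with $dx\,dy = r\,dr\,d\theta$:

$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy = \int_{0}^{2\pi}\int_{0}^{\infty} e^{-r^2}\,r\,dr\,d\theta = 2\pi\left[-\tfrac{1}{2}e^{-r^2}\right]_{0}^{\infty} = \pi,$

so that $I = \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$.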
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8799758553504944, "perplexity": 17718.334450885613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400283990.75/warc/CC-MAIN-20200927152349-20200927182349-00620.warc.gz"}
http://eprints.iisc.ernet.in/1340/
# Properties of a mixed-valent iron compound with the kagome lattice

Rao, CNR and Paul, Geo and Choudhury, Amitava and Sampathkumaran, EV and Raychaudhuri, Arup K and Ramasesha, S and Rudra, Indranil (2003) Properties of a mixed-valent iron compound with the kagome lattice. In: Physical Review B (Condensed Matter and Materials Physics), 67, pp. 134425-1.

An organically templated iron sulfate of the formula $[HN(CH_2)_6NH][Fe^{III}Fe_2^{II}F_6(SO_4)_2]\cdot[H_3O]$ possessing the kagome lattice has been prepared and characterized by single-crystal crystallography and other techniques. This mixed-valent iron compound shows complex magnetic properties including spin-glass behavior and magnetic hysteresis. The low-temperature specific heat data show deviation from the $T^2$ behavior found in two-dimensional frustrated systems. Simple calculations have been carried out to understand the properties of this kagome compound.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44425153732299805, "perplexity": 10149.5134493108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133417.25/warc/CC-MAIN-20140914011213-00040-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://terrytao.wordpress.com/
This is the third thread for the Polymath8b project to obtain new bounds for the quantity $\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$ either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ are: • (Maynard) Assuming the Elliott-Halberstam conjecture, ${H_1 \leq 12}$. • (Polymath8b, tentative) ${H_1 \leq 330}$. Assuming Elliott-Halberstam, ${H_2 \leq 330}$. • (Polymath8b, tentative) ${H_2 \leq 484{,}126}$. Assuming Elliott-Halberstam, ${H_4 \leq 493{,}408}$. • (Polymath8b) ${H_m \leq \exp( 3.817 m )}$ for sufficiently large ${m}$. Assuming Elliott-Halberstam, ${H_m \ll e^{2m} \log m}$ for sufficiently large ${m}$. Much of the current focus of the Polymath8b project is on the quantity $\displaystyle M_k = M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}$ where ${F}$ ranges over square-integrable functions on the simplex $\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}$ with ${I_k, J_k^{(m)}}$ being the quadratic forms $\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k$ and $\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2$ $\displaystyle dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.$ It was shown by Maynard that one has ${H_m \leq H(k)}$ whenever ${M_k > 4m}$, where ${H(k)}$ is the narrowest diameter of an admissible ${k}$-tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity ${M_k}$ does seem however to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction. The best asymptotic bounds for ${M_k}$ we have are $\displaystyle \log k - \log\log\log k + O(1) \leq M_k \leq \frac{k}{k-1} \log k \ \ \ \ \ (1)$ which we prove below the fold. The upper bound holds for all ${k > 1}$; the lower bound is only valid for sufficiently large ${k}$, and gives the upper bound ${H_m \ll e^{2m} \log m}$ on Elliott-Halberstam. For small ${k}$, the upper bound is quite competitive, for instance it provides the upper bound in the best values $\displaystyle 1.845 \leq M_4 \leq 1.848$ and $\displaystyle 2.001162 \leq M_5 \leq 2.011797$ we have for ${M_4}$ and ${M_5}$. The situation is a little less clear for medium values of ${k}$, for instance we have $\displaystyle 3.95608 \leq M_{59} \leq 4.148$ and so it is not yet clear whether ${M_{59} > 4}$ (which would imply ${H_1 \leq 300}$). See this wiki page for some further upper and lower bounds on ${M_k}$. The best lower bounds are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering the ${H_1}$ bound from ${600}$ to ${330}$), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved. 
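As a quick sanity check on the numerology above (my arithmetic, rounded to the displayed precision), the upper bound in (1) indeed reproduces the quoted upper bounds for small ${k}$:

$\displaystyle \frac{4}{3} \log 4 \approx 1.848, \qquad \frac{5}{4} \log 5 \approx 2.011797, \qquad \frac{59}{58} \log 59 \approx 4.148.$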
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk. The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space.
Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology. • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms. • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.) The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct. With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. 
First of all, there is Łos’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$, will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces. Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table. Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether. This is the second thread for the Polymath8b project to obtain new bounds for the quantity $\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$ either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ are: • (Maynard) ${H_1 \leq 600}$. • (Polymath8b, tentative) ${H_2 \leq 484,276}$. • (Polymath8b, tentative) ${H_m \leq \exp( 3.817 m )}$ for sufficiently large ${m}$. • (Maynard) Assuming the Elliott-Halberstam conjecture, ${H_1 \leq 12}$, ${H_2 \leq 600}$, and ${H_m \ll m^3 e^{2m}}$. Following the strategy of Maynard, the bounds on ${H_m}$ proceed by combining four ingredients: 1. Distribution estimates ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ for the primes (or related objects); 2. 
Bounds for the minimal diameter ${H(k)}$ of an admissible ${k}$-tuple; 3. Lower bounds for the optimal value ${M_k}$ to a certain variational problem; 4. Sieve-theoretic arguments to convert the previous three ingredients into a bound on ${H_m}$. Accordingly, the most natural routes to improve the bounds on ${H_m}$ are to improve one or more of the above four ingredients. Ingredient 1 was studied intensively in Polymath8a. The following results are known or conjectured (see the Polymath8a paper for notation and proofs): • (Bombieri-Vinogradov) ${EH[\theta]}$ is true for all ${0 < \theta < 1/2}$. • (Polymath8a) ${MPZ[\varpi,\delta]}$ is true for ${\frac{600}{7} \varpi + \frac{180}{7}\delta < 1}$. • (Polymath8a, tentative) ${MPZ[\varpi,\delta]}$ is true for ${\frac{1080}{13} \varpi + \frac{330}{13} \delta < 1}$. • (Elliott-Halberstam conjecture) ${EH[\theta]}$ is true for all ${0 < \theta < 1}$. Ingredient 2 was also studied intensively in Polymath8a, and is more or less a solved problem for the values of ${k}$ of interest (with exact values of ${H(k)}$ for ${k \leq 342}$, and quite good upper bounds for ${H(k)}$ for ${k < 5000}$, available at this page). So the main focus currently is on improving Ingredients 3 and 4. For Ingredient 3, the basic variational problem is to understand the quantity $\displaystyle M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}$ for ${F: {\cal R}_k \rightarrow {\bf R}}$ bounded measurable functions, not identically zero, on the simplex $\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}$ with ${I_k, J_k^{(m)}}$ being the quadratic forms $\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k$ and $\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_i)^2 dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.$ Equivalently, one has $\displaystyle M_k({\cal R}_k) := \sup_F \frac{\int_{{\cal R}_k} F {\cal L}_k F}{\int_{{\cal R}_k} F^2}$ where ${{\cal L}_k: L^2({\cal R}_k) \rightarrow L^2({\cal R}_k)}$ is the positive semi-definite bounded self-adjoint operator $\displaystyle {\cal L}_k F(t_1,\ldots,t_k) = \sum_{m=1}^k \int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_{m-1},s,t_{m+1},\ldots,t_k)\ ds,$ so ${M_k}$ is the operator norm of ${{\cal L}}$. Another interpretation of ${M_k({\cal R}_k)}$ is that the probability that a rook moving randomly in the unit cube ${[0,1]^k}$ stays in simplex ${{\cal R}_k}$ for ${n}$ moves is asymptotically ${(M_k({\cal R}_k)/k + o(1))^n}$. We now have a fairly good asymptotic understanding of ${M_k({\cal R}_k)}$, with the bounds $\displaystyle \log k - 2 \log\log k -2 \leq M_k({\cal R}_k) \leq \log k + \log\log k + 2$ holding for sufficiently large ${k}$. There is however still room to tighten the bounds on ${M_k({\cal R}_k)}$ for small ${k}$; I’ll summarise some of the ideas discussed so far below the fold. For Ingredient 4, the basic tool is this: Theorem 1 (Maynard) If ${EH[\theta]}$ is true and ${M_k({\cal R}_k) > \frac{2m}{\theta}}$, then ${H_m \leq H(k)}$. Thus, for instance, it is known that ${M_{105} > 4}$ and ${H(105)=600}$, and this together with the Bombieri-Vinogradov inequality gives ${H_1\leq 600}$. This result is proven in Maynard’s paper and an alternate proof is also given in the previous blog post. We have a number of ways to relax the hypotheses of this result, which we also summarise below the fold. 
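To spell out the deduction in the example just mentioned (a routine step, added here for clarity): since ${M_{105} > 4}$ is a strict inequality, one may choose ${\theta < 1/2}$ close enough to ${1/2}$ that

$\displaystyle M_{105}({\cal R}_{105}) > \frac{2}{\theta},$

and the Bombieri-Vinogradov theorem supplies ${EH[\theta]}$ for this ${\theta}$; Theorem 1 with ${m=1}$ and ${k=105}$ then gives ${H_1 \leq H(105) = 600}$.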
For each natural number ${m}$, let ${H_m}$ denote the quantity $\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$ where ${p_n}$ denotes the ${n\textsuperscript{th}}$ prime. In other words, ${H_m}$ is the least quantity such that there are infinitely many intervals of length ${H_m}$ that contain ${m+1}$ or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that ${H_1 = 2}$, and the prime tuples conjecture would imply that ${H_m}$ is equal to the diameter of the narrowest admissible tuple of cardinality ${m+1}$ (thus we conjecturally have ${H_1 = 2}$, ${H_2 = 6}$, ${H_3 = 8}$, ${H_4 = 12}$, ${H_5 = 16}$, and so forth; see this web page for further continuation of this sequence). In 2004, Goldston, Pintz, and Yildirim established the bound ${H_1 \leq 16}$ conditional on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional finiteness of ${H_1}$ was obtained (although they famously obtained the non-trivial bound ${p_{n+1}-p_n = o(\log p_n)}$), and even on the Elliot-Halberstam conjecture no finiteness result on the higher ${H_m}$ was obtained either (although they were able to show ${p_{n+2}-p_n=o(\log p_n)}$ on this conjecture). In the recent breakthrough of Zhang, the unconditional bound ${H_1 \leq 70,000,000}$ was obtained, by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to ${H_1 \leq 4,680}$. With the very recent preprint of James Maynard, we have the following further substantial improvements: Theorem 1 (Maynard’s theorem) Unconditionally, we have the following bounds: • ${H_1 \leq 600}$. • ${H_m \leq C m^3 e^{4m}}$ for an absolute constant ${C}$ and any ${m \geq 1}$. If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds: • ${H_1 \leq 12}$. • ${H_2 \leq 600}$. • ${H_m \leq C m^3 e^{2m}}$ for an absolute constant ${C}$ and any ${m \geq 1}$. The final conclusion ${H_m \leq C m^3 e^{2m}}$ on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning ${H_m}$, but was only able to obtain the slightly weaker bound ${H_m \leq C \exp( C m )}$ unconditionally.) In the converse direction, the prime tuples conjecture implies that ${H_m}$ should be comparable to ${m \log m}$. Granville has also obtained the slightly weaker explicit bound ${H_m \leq e^{8m+5}}$ for any ${m \geq 1}$ by a slight modification of Maynard’s argument. The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim, than to the sieve used later by Goldston, Pintz, and Yildirim on which almost all subsequent work is based. The aim of the Polymath8b project is to obtain improved bounds on ${H_1, H_2}$, and higher values of ${H_m}$, either conditional on the Elliott-Halberstam conjecture or unconditional. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. 
This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has just been completed also by Andrew Granville. It’s time to (somewhat belatedly) roll over the previous thread on writing the first paper from the Polymath8 project, as this thread is overflowing with comments.  We are getting near the end of writing this large (173 pages!) paper, establishing a bound of 4,680 on the gap between primes, with only a few sections left to thoroughly proofread (and the last section should probably be removed, with appropriate changes elsewhere, in view of the more recent progress by Maynard).  As before, one can access the working copy of the paper at this subdirectory, as well as the rest of the directory, and the plan is to submit the paper to Algebra and Number theory (and the arXiv) once there is consensus to do so.  Even before this paper was submitted, it already has had some impact; Andrew Granville’s exposition of the bounded gaps between primes story for the Bulletin of the AMS follows several of the Polymath8 arguments in deriving the result. After this paper is done, there is interest in continuing onwards with other Polymath8 – related topics, and perhaps it is time to start planning for them.  First of all, we have an invitation from  the Newsletter of the European Mathematical Society to discuss our experiences and impressions with the project.  I think it would be interesting to collect some impressions or thoughts (both positive and negative)  from people who were highly active in the research and/or writing aspects of the project, as well as from more casual participants who were following the progress more quietly.  This project seemed to attract a bit more attention than most other polymath projects (with the possible exception of the very first project, Polymath1).  I think there are several reasons for this; the project builds upon a recent breakthrough (Zhang’s paper) that attracted an impressive amount of attention and publicity; the objective is quite easy to describe, when compared against other mathematical research objectives; and one could summarise the current state of progress by a single natural number H, which implied by infinite descent that the project was guaranteed to terminate at some point, but also made it possible to set up a “scoreboard” that could be quickly and easily updated.  From the research side, another appealing feature of the project was that – in the early stages of the project, at least – it was quite easy to grab a new world record by means of making a small observation, which made it fit very well with the polymath spirit (in which the emphasis is on lots of small contributions by many people, rather than a few big contributions by a small number of people).  
Indeed, when the project first arose spontaneously as a blog post of Scott Morrrison over at the Secret Blogging Seminar, I was initially hesitant to get involved, but soon found the “game” of shaving a few thousands or so off of $H$ to be rather fun and addictive, and with a much greater sense of instant gratification than traditional research projects, which often take months before a satisfactory conclusion is reached.  Anyway, I would welcome other thoughts or impressions on the projects in the comments below (I think that the pace of comments regarding proofreading of the paper has slowed down enough that this post can accommodate both types of comments comfortably.) Then of course there is the “Polymath 8b” project in which we build upon the recent breakthroughs of James Maynard, which have simplified the route to bounded gaps between primes considerably, bypassing the need for any Elliott-Halberstam type distribution results beyond the Bombieri-Vinogradov theorem.  James has kindly shown me an advance copy of the preprint, which should be available on the arXiv in a matter of days; it looks like he has made a modest improvement to the previously announced results, improving $k_0$ a bit to 105 (which then improves H to the nice round number of 600).  He also has a companion result on bounding gaps $p_{n+m}-p_n$ between non-consecutive primes for any $m$ (not just $m=1$), with a bound of the shape $H_m := \lim \inf_{n \to \infty} p_{n+m}-p_n \ll m^3 e^{4m}$, which is in fact the first time that the finiteness of this limit inferior has been demonstrated.  I plan to discuss these results (from a slightly different perspective than Maynard) in a subsequent blog post kicking off the Polymath8b project, once Maynard’s paper has been uploaded.  It should be possible to shave the value of $H = H_1$ down further (or to get better bounds for $H_m$ for larger $m$), both unconditionally and under assumptions such as the Elliott-Halberstam conjecture, either by performing more numerical or theoretical optimisation on the variational problem Maynard is faced with, and also by using the improved distributional estimates provided by our existing paper; again, I plan to discuss these issues in a subsequent post. ( James, by the way, has expressed interest in participating in this project, which should be very helpful.) The classical foundations of probability theory (discussed for instance in this previous blog post) is founded on the notion of a probability space ${(\Omega, {\cal E}, {\bf P})}$ – a space ${\Omega}$ (the sample space) equipped with a ${\sigma}$-algebra ${{\cal E}}$ (the event space), together with a countably additive probability measure ${{\bf P}: {\cal E} \rightarrow [0,1]}$ that assigns a real number in the interval ${[0,1]}$ to each event. One can generalise the concept of a probability space to a finitely additive probability space, in which the event space ${{\cal E}}$ is now only a Boolean algebra rather than a ${\sigma}$-algebra, and the measure ${\mu}$ is now only finitely additive instead of countably additive, thus ${{\bf P}( E \vee F ) = {\bf P}(E) + {\bf P}(F)}$ when ${E,F}$ are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting. 
In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value ${{\bf P}(E)}$ to each event, but instead merely records whether this probability is zero, one, or something in between. Thus ${{\bf P}}$ is now a function from ${{\cal E}}$ to the set ${\{0, I, 1\}}$, where ${I}$ is a new symbol that replaces all the elements of the open interval ${(0,1)}$. In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.) The main reason I want to introduce this weak notion of probability theory is that it becomes suited to talk about random variables living inside algebraic varieties, even if these varieties are defined over fields other than ${{\bf R}}$ or ${{\bf C}}$. In algebraic geometry one often talks about a “generic” element of a variety ${V}$ defined over a field ${k}$, which does not lie in any specified variety of lower dimension defined over ${k}$. Once ${V}$ has positive dimension, such generic elements do not exist as classical, deterministic ${k}$-points ${x}$ in ${V}$, since of course any such point lies in the ${0}$-dimensional subvariety ${\{x\}}$ of ${V}$. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field ${k}$ to a sufficiently transcendental extension ${\tilde k}$, in order to locate a sufficient number of generic points in ${V(\tilde k)}$. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in ${V}$ as being associated to the zero ideal in the function ring of ${V}$. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable ${{\bf x}}$ taking values in ${V}$, but which lies in any given lower-dimensional subvariety of ${V}$ with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over ${{\bf R}}$ or ${{\bf C}}$) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”. It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable ${x}$ is the set of all predicates ${\phi(x)}$ that are almost surely obeyed by ${x}$. In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). 
However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory. One of the basic tools in modern combinatorics is the probabilistic method, introduced by Erdos, in which a deterministic solution to a given problem is shown to exist by constructing a random candidate for a solution, and showing that this candidate solves all the requirements of the problem with positive probability. When the problem requires a real-valued statistic ${X}$ to be suitably large or suitably small, the following trivial observation is often employed: Proposition 1 (Comparison with mean) Let ${X}$ be a random real-valued variable, whose mean (or first moment) ${\mathop{\bf E} X}$ is finite. Then $\displaystyle X \leq \mathop{\bf E} X$ with positive probability, and $\displaystyle X \geq \mathop{\bf E} X$ with positive probability. This proposition is usually applied in conjunction with a computation of the first moment ${\mathop{\bf E} X}$, in which case this version of the probabilistic method becomes an instance of the first moment method. (For comparison with other moment methods, such as the second moment method, exponential moment method, and zeroth moment method, see Chapter 1 of my book with Van Vu. For a general discussion of the probabilistic method, see the book by Alon and Spencer of the same name.) As a typical example in random matrix theory, if one wanted to understand how small or how large the operator norm ${\|A\|_{op}}$ of a random matrix ${A}$ could be, one might first try to compute the expected operator norm ${\mathop{\bf E} \|A\|_{op}}$ and then apply Proposition 1; see this previous blog post for examples of this strategy (and related strategies, based on comparing ${\|A\|_{op}}$ with more tractable expressions such as the moments ${\hbox{tr} A^k}$). (In this blog post, all matrices are complex-valued.) Recently, in their proof of the Kadison-Singer conjecture (and also in their earlier paper on Ramanujan graphs), Marcus, Spielman, and Srivastava introduced an striking new variant of the first moment method, suited in particular for controlling the operator norm ${\|A\|_{op}}$ of a Hermitian positive semi-definite matrix ${A}$. Such matrices have non-negative real eigenvalues, and so ${\|A\|_{op}}$ in this case is just the largest eigenvalue ${\lambda_1(A)}$ of ${A}$. Traditionally, one tries to control the eigenvalues through averaged statistics such as moments ${\hbox{tr} A^k = \sum_i \lambda_i(A)^k}$ or Stieltjes transforms ${\hbox{tr} (A-z)^{-1} = \sum_i (\lambda_i(A)-z)^{-1}}$; again, see this previous blog post. Here we use ${z}$ as short-hand for ${zI_d}$, where ${I_d}$ is the ${d \times d}$ identity matrix. 
Marcus, Spielman, and Srivastava instead rely on the interpretation of the eigenvalues ${\lambda_i(A)}$ of ${A}$ as the roots of the characteristic polynomial ${p_A(z) := \hbox{det}(z-A)}$ of ${A}$, thus $\displaystyle \|A\|_{op} = \hbox{maxroot}( p_A ) \ \ \ \ \ (1)$ where ${\hbox{maxroot}(p)}$ is the largest real root of a non-zero polynomial ${p}$. (In our applications, we will only ever apply ${\hbox{maxroot}}$ to polynomials that have at least one real root, but for sake of completeness let us set ${\hbox{maxroot}(p)=-\infty}$ if ${p}$ has no real roots.) Prior to the work of Marcus, Spielman, and Srivastava, I think it is safe to say that the conventional wisdom in random matrix theory was that the representation (1) of the operator norm ${\|A\|_{op}}$ was not particularly useful, due to the highly non-linear nature of both the characteristic polynomial map ${A \mapsto p_A}$ and the maximum root map ${p \mapsto \hbox{maxroot}(p)}$. (Although, as pointed out to me by Adam Marcus, some related ideas have occurred in graph theory rather than random matrix theory, for instance in the theory of the matching polynomial of a graph.) For instance, a fact as basic as the triangle inequality ${\|A+B\|_{op} \leq \|A\|_{op} + \|B\|_{op}}$ is extremely difficult to establish through (1). Nevertheless, it turns out that for certain special types of random matrices ${A}$ (particularly those in which a typical instance ${A}$ of this ensemble has a simple relationship to “adjacent” matrices in this ensemble), the polynomials ${p_A}$ enjoy an extremely rich structure (in particular, they lie in families of real stable polynomials, and hence enjoy good combinatorial interlacing properties) that can be surprisingly useful. In particular, Marcus, Spielman, and Srivastava established the following nonlinear variant of Proposition 1: Proposition 2 (Comparison with mean) Let ${m,d \geq 1}$. Let ${A}$ be a random matrix, which is the sum ${A = \sum_{i=1}^m A_i}$ of independent Hermitian rank one ${d \times d}$ matrices ${A_i}$, each taking a finite number of values. Then $\displaystyle \hbox{maxroot}(p_A) \leq \hbox{maxroot}( \mathop{\bf E} p_A )$ with positive probability, and $\displaystyle \hbox{maxroot}(p_A) \geq \hbox{maxroot}( \mathop{\bf E} p_A )$ with positive probability. We prove this proposition below the fold. The hypothesis that each ${A_i}$ only takes finitely many values is technical and can likely be relaxed substantially, but we will not need to do so here. Despite the superficial similarity with Proposition 1, the proof of Proposition 2 is quite nonlinear; in particular, one needs the interlacing properties of real stable polynomials to proceed. Another key ingredient in the proof is the observation that while the determinant ${\hbox{det}(A)}$ of a matrix ${A}$ generally behaves in a nonlinar fashion on the underlying matrix ${A}$, it becomes (affine-)linear when one considers rank one perturbations, and so ${p_A}$ depends in an affine-multilinear fashion on the ${A_1,\ldots,A_m}$. More precisely, we have the following deterministic formula, also proven below the fold: Proposition 3 (Deterministic multilinearisation formula) Let ${A}$ be the sum of deterministic rank one ${d \times d}$ matrices ${A_1,\ldots,A_m}$. 
Then we have $\displaystyle p_A(z) = \mu[A_1,\ldots,A_m](z) \ \ \ \ \ (2)$ for all ${z \in C}$, where the mixed characteristic polynomial ${\mu[A_1,\ldots,A_m](z)}$ of any ${d \times d}$ matrices ${A_1,\ldots,A_m}$ (not necessarily rank one) is given by the formula $\displaystyle \mu[A_1,\ldots,A_m](z) \ \ \ \ \ (3)$ $\displaystyle = (\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i})) \hbox{det}( z + \sum_{i=1}^m z_i A_i ) |_{z_1=\ldots=z_m=0}.$ Among other things, this formula gives a useful representation of the mean characteristic polynomial ${\mathop{\bf E} p_A}$: Corollary 4 (Random multilinearisation formula) Let ${A}$ be the sum of jointly independent rank one ${d \times d}$ matrices ${A_1,\ldots,A_m}$. Then we have $\displaystyle \mathop{\bf E} p_A(z) = \mu[ \mathop{\bf E} A_1, \ldots, \mathop{\bf E} A_m ](z) \ \ \ \ \ (4)$ for all ${z \in {\bf C}}$. Proof: For fixed ${z}$, the expression ${\hbox{det}( z + \sum_{i=1}^m z_i A_i )}$ is a polynomial combination of the ${z_i A_i}$, while the differential operator ${(\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i}))}$ is a linear combination of differential operators ${\frac{\partial^j}{\partial z_{i_1} \ldots \partial z_{i_j}}}$ for ${1 \leq i_1 < \ldots < i_j \leq d}$. As a consequence, we may expand (3) as a linear combination of terms, each of which is a multilinear combination of ${A_{i_1},\ldots,A_{i_j}}$ for some ${1 \leq i_1 < \ldots < i_j \leq d}$. Taking expectations of both sides of (2) and using the joint independence of the ${A_i}$, we obtain the claim. $\Box$ In view of Proposition 2, we can now hope to control the operator norm ${\|A\|_{op}}$ of certain special types of random matrices ${A}$ (and specifically, the sum of independent Hermitian positive semi-definite rank one matrices) by first controlling the mean ${\mathop{\bf E} p_A}$ of the random characteristic polynomial ${p_A}$. Pursuing this philosophy, Marcus, Spielman, and Srivastava establish the following result, which they then use to prove the Kadison-Singer conjecture: Theorem 5 (Marcus-Spielman-Srivastava theorem) Let ${m,d \geq 1}$. Let ${v_1,\ldots,v_m \in {\bf C}^d}$ be jointly independent random vectors in ${{\bf C}^d}$, with each ${v_i}$ taking a finite number of values. Suppose that we have the normalisation $\displaystyle \mathop{\bf E} \sum_{i=1}^m v_i v_i^* = 1$ where we are using the convention that ${1}$ is the ${d \times d}$ identity matrix ${I_d}$ whenever necessary. Suppose also that we have the smallness condition $\displaystyle \mathop{\bf E} \|v_i\|^2 \leq \epsilon$ for some ${\epsilon>0}$ and all ${i=1,\ldots,m}$. Then one has $\displaystyle \| \sum_{i=1}^m v_i v_i^* \|_{op} \leq (1+\sqrt{\epsilon})^2 \ \ \ \ \ (5)$ with positive probability. Note that the upper bound in (5) must be at least ${1}$ (by taking ${v_i}$ to be deterministic) and also must be at least ${\epsilon}$ (by taking the ${v_i}$ to always have magnitude at least ${\sqrt{\epsilon}}$). Thus the bound in (5) is asymptotically tight both in the regime ${\epsilon\rightarrow 0}$ and in the regime ${\epsilon \rightarrow \infty}$; the latter regime will be particularly useful for applications to Kadison-Singer. 
It should also be noted that if one uses more traditional random matrix theory methods (based on tools such as Proposition 1, as well as more sophisticated variants of these tools, such as the concentration of measure results of Rudelson and Ahlswede-Winter), one obtains a bound of ${\| \sum_{i=1}^m v_i v_i^* \|_{op} \ll_\epsilon \log d}$ with high probability, which is insufficient for the application to the Kadison-Singer problem; see this article of Tropp. Thus, Theorem 5 obtains a sharper bound, at the cost of trading in “high probability” for “positive probability”. In the paper of Marcus, Spielman and Srivastava, Theorem 5 is used to deduce a conjecture ${KS_2}$ of Weaver, which was already known to imply the Kadison-Singer conjecture; actually, a slight modification of their argument gives the paving conjecture of Kadison and Singer, from which the original Kadison-Singer conjecture may be readily deduced. We give these implications below the fold. (See also this survey article for some background on the Kadison-Singer problem.) Let us now summarise how Theorem 5 is proven. In the spirit of semi-definite programming, we rephrase the above theorem in terms of the rank one Hermitian positive semi-definite matrices ${A_i := v_iv_i^*}$: Theorem 6 (Marcus-Spielman-Srivastava theorem again) Let ${A_1,\ldots,A_m}$ be jointly independent random rank one Hermitian positive semi-definite ${d \times d}$ matrices such that the sum ${A :=\sum_{i=1}^m A_i}$ has mean $\displaystyle \mathop{\bf E} A = I_d$ and such that $\displaystyle \mathop{\bf E} \hbox{tr} A_i \leq \epsilon$ for some ${\epsilon>0}$ and all ${i=1,\ldots,m}$. Then one has $\displaystyle \| A \|_{op} \leq (1+\sqrt{\epsilon})^2$ with positive probability. In view of (1) and Proposition 2, this theorem follows from the following control on the mean characteristic polynomial: Theorem 7 (Control of mean characteristic polynomial) Let ${A_1,\ldots,A_m}$ be jointly independent random rank one Hermitian positive semi-definite ${d \times d}$ matrices such that the sum ${A :=\sum_{i=1}^m A_i}$ has mean $\displaystyle \mathop{\bf E} A = 1$ and such that $\displaystyle \mathop{\bf E} \hbox{tr} A_i \leq \epsilon$ for some ${\epsilon>0}$ and all ${i=1,\ldots,m}$. Then one has $\displaystyle \hbox{maxroot}(\mathop{\bf E} p_A) \leq (1 +\sqrt{\epsilon})^2.$ This result is proven using the multilinearisation formula (Corollary 4) and some convexity properties of real stable polynomials; we give the proof below the fold. Thanks to Adam Marcus, Assaf Naor and Sorin Popa for many useful explanations on various aspects of the Kadison-Singer problem. I’ve just finished the first draft of my book “Expansion in finite simple groups of Lie type“, which is  based in the lecture notes for my graduate course on this topic that were previously posted on this blog.  It also contains some newer material, such as the notes on Lie algebras and Lie groups that I posted most recently here. Let ${F}$ be a field. A definable set over ${F}$ is a set of the form $\displaystyle \{ x \in F^n | \phi(x) \hbox{ is true} \} \ \ \ \ \ (1)$ where ${n}$ is a natural number, and ${\phi(x)}$ is a predicate involving the ring operations ${+,\times}$ of ${F}$, the equality symbol ${=}$, an arbitrary number of constants and free variables in ${F}$, the quantifiers ${\forall, \exists}$, boolean operators such as ${\vee,\wedge,\neg}$, and parentheses and colons, where the quantifiers are always understood to be over the field ${F}$. 
Thus, for instance, the set of quadratic residues $\displaystyle \{ x \in F | \exists y: x = y \times y \}$ is definable over ${F}$, and any algebraic variety over ${F}$ is also a definable set over ${F}$. Henceforth we will abbreviate “definable over ${F}$” simply as “definable”. If ${F}$ is a finite field, then every subset of ${F^n}$ is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that ${E \subset F^n}$ is a definable set of complexity at most ${M}$ if ${n \leq M}$, and ${E}$ can be written in the form (1) for some predicate ${\phi}$ of length at most ${M}$ (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in ${n}$ dimensions of degree ${d}$ would be a definable set of complexity ${O_{n,d}(1)}$. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large. In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells: Lemma 1 (Algebraic regularity lemma) Let ${F}$ be a finite field, let ${V,W}$ be definable non-empty sets of complexity at most ${M}$, and let ${E \subset V \times W}$ also be definable with complexity at most ${M}$. Assume that the characteristic of ${F}$ is sufficiently large depending on ${M}$. Then we may partition ${V = V_1 \cup \ldots \cup V_m}$ and ${W = W_1 \cup \ldots \cup W_n}$ with ${m,n = O_M(1)}$, with the following properties: • (Definability) Each of the ${V_1,\ldots,V_m,W_1,\ldots,W_n}$ are definable of complexity ${O_M(1)}$. • (Size) We have ${|V_i| \gg_M |V|}$ and ${|W_j| \gg_M |W|}$ for all ${i=1,\ldots,m}$ and ${j=1,\ldots,n}$. • (Regularity) We have $\displaystyle |E \cap (A \times B)| = d_{ij} |A| |B| + O_M( |F|^{-1/4} |V| |W| ) \ \ \ \ \ (2)$ for all ${i=1,\ldots,m}$, ${j=1,\ldots,n}$, ${A \subset V_i}$, and ${B\subset W_j}$, where ${d_{ij}}$ is a rational number in ${[0,1]}$ with numerator and denominator ${O_M(1)}$. My original proof of this lemma was quite complicated, based on an explicit calculation of the “square” $\displaystyle \mu(w,w') := \{ v \in V: (v,w), (v,w') \in E \}$ of ${E}$ using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.) Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model theoretic techniques, in particular using (a local version of) the concept of stability. 
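The quadratic residue example is a convenient test case for the kind of cardinality statements at play here. The snippet below is an illustration, not part of the original post: it simply counts the definable set $\{x \in F : \exists y\ x = y \times y\}$ for a few primes and shows its size hugging $\frac{1}{2}|F|$ up to a bounded error, which is the sort of behaviour that the regularity lemma and the counting results discussed next make systematic for all definable sets of bounded complexity.

```python
def quadratic_residues(p):
    """The definable set { x in F_p : exists y with x = y*y }, including 0."""
    return {(y * y) % p for y in range(p)}

for p in [101, 1009, 10007, 100003]:
    E = quadratic_residues(p)
    print(f"p = {p:6d}   |E| = {len(E):6d}   |E| - p/2 = {len(E) - p / 2:+.1f}")
```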
One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields ${F}$ (although its content is trivial if the field is not sufficiently large depending on the complexity bound ${M}$). Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:

Proposition 2 Let ${F}$ be a finite field, and let ${M > 0}$.

• (Discretised cardinality) If ${E}$ is a non-empty definable set of complexity at most ${M}$, then one has

$\displaystyle |E| = c |F|^d + O_M( |F|^{d-1/2} ) \ \ \ \ \ (3)$

where ${d = O_M(1)}$ is a natural number, and ${c}$ is a positive rational number with numerator and denominator ${O_M(1)}$. In particular, we have ${|F|^d \ll_M |E| \ll_M |F|^d}$.

• (Definable cardinality) Assume ${|F|}$ is sufficiently large depending on ${M}$. If ${V, W}$, and ${E \subset V \times W}$ are definable sets of complexity at most ${M}$, so that ${E_w := \{ v \in V: (v,w) \in E \}}$ can be viewed as a definable subset of ${V}$ that is definably parameterised by ${w \in W}$, then for each natural number ${d = O_M(1)}$ and each positive rational ${c}$ with numerator and denominator ${O_M(1)}$, the set

$\displaystyle \{ w \in W: |E_w| = c |F|^d + O_M( |F|^{d-1/2} ) \} \ \ \ \ \ (4)$

is definable with complexity ${O_M(1)}$, where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the “dimension” ${d}$ and “measure” ${c}$ of ${E_w}$ depend definably on ${w}$.)

We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)

The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most ${M}$ and cardinality ${|F|^{1/2}}$, if ${|F|}$ is sufficiently large depending on ${M}$. If ${E \subset V}$ are definable sets of complexity at most ${M}$, it shows that ${|E| = (c+ O_M(|F|^{-1/2})) |V|}$ for some rational ${0\leq c \leq 1}$ with numerator and denominator ${O_M(1)}$; furthermore, if ${c=0}$, we may improve this bound to ${|E| = O_M( |F|^{-1} |V|)}$. In particular, we obtain the following “self-improving” properties:

• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${|E| \leq \epsilon |V|}$ for some ${\epsilon>0}$, then (if ${\epsilon}$ is sufficiently small depending on ${M}$ and ${F}$ is sufficiently large depending on ${M}$) this forces ${|E| = O_M( |F|^{-1} |V| )}$.
• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${||E| - c |V|| \leq \epsilon |V|}$ for some ${\epsilon>0}$ and positive rational ${c}$, then (if ${\epsilon}$ is sufficiently small depending on ${M,c}$ and ${F}$ is sufficiently large depending on ${M,c}$) this forces ${|E| = c |V| + O_M( |F|^{-1/2} |V| )}$.

It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to ${E}$) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.

I've just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“. This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout's theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”. There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, come into play also.

The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker's theorem in transcendence theory, and in Stepanov's method for giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the Nullstellensatz of Alon is another relatively early use of the polynomial method. More recently, it underlies Dvir's proof of the Kakeya conjecture over finite fields and Guth and Katz's near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem. One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.

This is something of a first draft, so comments and suggestions are even more welcome than usual. (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)
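As a concrete taste of the polynomial method, the basic interpolation step is easy to code: any $N$ points lie in the zero set of a nonzero polynomial of degree $d$ as soon as the number $\binom{d+n}{n}$ of monomials of degree at most $d$ in $n$ variables exceeds $N$, because the vanishing conditions are then $N$ linear constraints on more than $N$ unknown coefficients. The sketch below uses a purely illustrative point set, assumes SymPy, and works over the rationals rather than a finite field.

```python
import itertools
from math import comb, prod
import sympy as sp

def vanishing_polynomial(points, n=2):
    """Return a nonzero polynomial of low degree vanishing on all given points."""
    N = len(points)
    d = 0
    while comb(d + n, n) <= N:      # smallest degree guaranteed to work by counting
        d += 1
    xs = sp.symbols(f'x1:{n+1}')
    monomials = [e for e in itertools.product(range(d + 1), repeat=n) if sum(e) <= d]
    # One row per point: the values of every monomial of degree <= d at that point.
    M = sp.Matrix([[prod(pt[i] ** e[i] for i in range(n)) for e in monomials]
                   for pt in points])
    coeffs = M.nullspace()[0]       # nonempty because #monomials > #points
    return sp.expand(sum(c * sp.Mul(*[xs[i] ** e[i] for i in range(n)])
                         for c, e in zip(coeffs, monomials)))

if __name__ == '__main__':
    pts = [(0, 0), (1, 2), (2, 3), (3, 5), (4, 1), (5, 4), (6, 6), (7, 0)]
    P = vanishing_polynomial(pts)
    x1, x2 = sp.symbols('x1 x2')
    print('vanishing polynomial:', P)
    print('total degree:', sp.Poly(P, x1, x2).total_degree())
    print('vanishes on all points:',
          all(P.subs({x1: a, x2: b}) == 0 for a, b in pts))
```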
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 462, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9399732351303101, "perplexity": 291.0268352494186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164043130/warc/CC-MAIN-20131204133403-00078-ip-10-33-133-15.ec2.internal.warc.gz"}
http://blog.applied.ai/a-practical-introduction-to-options-part-1/
In this series we cover options: a deceptively complex trading instrument that provide an entirely different type of insurance - against directional moves in financial markets. I have spent most of my professional life as a quantitative analyst or 'quant' - a person who performs statistical analysis on financial data, usually focused on providing a competitive advantage to a firm's trading strategies - specifically in derivatives or 'options' trading. Options are derivative contracts that allow the holder the right, but not the obligation, to trade (buy or sell) a specific security for a specific price for a specific period of time, and I'll explain much more in the following series. What I find fascinating about options trading is that it levels the playing field: allowing small financial firms to play extremely effective strategies despite having neither the technical advantage of recently controversial high-frequency trading (HFT) firms, nor the scale and diversified advantages enjoyed by well-known investment banks like Goldman Sachs and Morgan Stanley. By the time this series is finished, I aim to help people understand why this is possible, what is it about options that allows for such profitable trading? In the securities business, it is not uncommon for small-sized firms to excel at their primary business, and not because they engage in deceitful, fraudulent or exploitative practices. Generally, it is because they have identified an edge over the competition. This edge can come in different forms, and one such form is knowledge-based. They know something the rest of the market does not. Options trading allows this edge for a simple reason: options are complex and most people do not understand them. Hopefully, this article will help you to grasp the basics. "If you intelligently trade derivatives, it’s like a license to steal" - Charlie Munger, Berkshire Hathaway, 2014 I want to discuss practical issues with options, options trading, and the various trading infrastructures that are in place. A fascinating topic and rarely covered, despite its importance. For clarity, I will focus solely on vanilla, exchange-traded options, and will not discuss options with more esoteric features like path-dependent payoffs. We will get to price and other quantitative behaviour in future articles of this series, but before that, it is important to know a little about the infrastructure and details of the options markets themselves. While the underlying quantitative behaviour is fascinating, many of the seemingly practical matters can be hard to learn about. ## Notes I will gloss over some technical points for the purposes of clarity - although not too much. My aim is the Einsteinian principle of "as simple as possible, but not simpler". There are many sources on pricing methodology and other quantitative theory of options. This will not be one of them. I doubt I would do the topic justice but will provide some links at the end of the series for those interested. We will talk numbers and statistics in future articles. # What is an Option? Options are derivative contracts that allow the holder the right, but not the obligation, to trade (buy or sell) a specific security for a specific price for a specific period of time. The asset for which the option confers the right to trade is termed the underlying asset (or underlying for short). Any type of asset can be used as the underlying, and this article will focus on equity options (options on stocks such as Google, Apple, IBM, 3M and WalMart). 
It is the most common type of option, and that with which I am most familiar. Most of the principles discussed here have analogies with other underlying assets, such as currencies, futures or bonds1, so this focus does not lose much generality. Before we begin, it is best to first lay out some terminology and notation. Long & short • A long position is one where the instrument is 'owned' by the holder, who profits from a rise in the price of that instrument. • A short position is one where a profit can be made from a drop in the price of the instrument. The holder loans the instrument from someone else (and they owe it to them at some agreed point in the future) before immediately selling it in the marketplace. They make a profit if they can later buy the instrument (to return to the loan-maker) for less money than they sold it. These terms are used very generically in finance. Thus, a speculator will state she is 'long interest rates' - meaning they hold a combination of assets that will be profitable in the event of a rise in interest rates. While these terms often seem to be abused (for example, what does it mean to be 'short gamma in US equities')2, the key thing to remember is that long positions want a rise in price and short positions want a decline. Call & put • an option that confers the right to buy the underlying is termed a call option (call for short) • an option that confers the right to sell is termed a put option. Calls and puts are very closely linked in terms of behaviour and price, called put / call parity, and we will discuss this in a future article. Strike & expiration • The specified trade price for the option contract is termed the strike price, and the time period for which this right is conferred is the lifetime of the option. • The date at which this right ends is the expiration date or expiry date. Options are insurance policies against the movement of the underlying price. A call is a policy against the stock price going up, a put is a policy against the stock price going down, and the policy lasts until expiration. Exercise • Finally, most option contracts have a feature that is known as early exercise - that is, the right to buy or sell the stock can be exercised prior to expiration. • Most options traded have this feature and are termed American options. This has nothing to do with geography and is presumably some historical artefact. • Options that do not have early exercise are termed European options. The vast majority of options traded have early exercise rights (aka 'American' options). We will largely ignore 'European' options, although they are still very important from a modelling point of view as they are easier to price.3 The mechanics of trading equity options is very similar in principle to trading other common financial assets such as equities or futures. Options are traded on an exchange, and are treated as assets in your account. Two of the most important concerns in financial trading are counterparty risk and liquidity risk. Both are important concepts, and attempts to mitigate them explain the existence of a lot of infrastructure that has built up around the asset markets.4 ## Counterparty Risk Counterparty risk is the risk that the person you trade with (your counterparty) is not fit or willing to make good on the trade when due. The sudden reappearance of counterparty risk was the major contributing factor to the Credit Crisis of 2008. Huge losses in subprime mortgages threatened the existence of a number of large financial institutions. 
As a result, other institutions that had existing agreements with the distressed counterparties were now concerned about their ability to sustain current financial agreements. A good analogy is insurance companies. If your house burns down, you want to be sure that the company that wrote your policy is in business and can pay the compensation. Usually this is not a consideration, but if a lot of houses all burn down at once (say due to a huge forest fire) - this can become a huge problem. Very large hurricanes and natural disasters can bankrupt insurance companies, and large companies providing such catastrophe insurance try hard to diversify risks geographically. Similarly, if you buy a call option and the stock rockets up through the strike price and is now worth many, many multiples of what you paid for it, you want to ensure that the person from whom you bought it can deliver. ## Liquidity Risk Liquidity risk is the risk of the asset losing its liquidity. Liquidity is a commonly-used but nebulous term describing how difficult it is to find counterparties to trade an asset at a reasonable price. Liquid assets are easy to trade in large quantities, and such trades do not have a large effect on the price. As you might imagine from the lack of precision in the terms used in its definition, liquidity is difficult to quantify5. In broad terms, currencies tend to be extremely liquid, followed by equities and commodity futures. At the other end of the spectrum, real estate is highly illiquid, even in a booming property market6. Both of these issues are serious business risks, and were even more so in the early days of finance7. Such concerns led to the creation of exchanges and clearing. An exchange is a legal entity that serves as a central marketplace for traders of a particular asset type. It standardises contracts - especially important for options and futures - and centralises the liquidity in a particular venue. Trading on an exchange is a special privilege given only to members of the exchange, so members either trade for themselves or act as brokers for third parties. Most participants are customers of brokers, since becoming a member of an exchange is expensive and time consuming. As such, it is rarely worth becoming a member unless it is a primary focus of your business. Once a trade occurs between a buyer and seller, it is recorded on the exchange, with trade notifications sent to a number of interested parties including both primary participants in the trade, regulatory authorities, and market data providers. Most importantly, the trade is registered with the clearing and settlements system. The clearing system is how trades are settled, and helps mitigate against counterparty risk: once your trade is reported it is the responsibility of the clearing system to ensure participants receive / deliver their assets and cash. Once a trade is registered your counterparty is now the clearing system not the person or company on the other side of your trade. Thus, counterparty risk is much reduced.8 Settlement of trades usually happens a number of days after the date of trade, usually three days (for historical reasons), but attempts have begun to reduce this down to a $T+1$ system: cash and assets are transferred a day after the trade date. An interesting consequence of the old $T+3$ settlement system is the fact that US exchanges are never closed for more three days in a row: this ensured people could always liquidate assets to meet settlement obligations. 
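As a toy illustration of the settlement arithmetic just described (this is not from the article, and the holiday calendar below is a made-up placeholder), rolling a trade date forward by N business days looks like this:

```python
# Sketch of T+N settlement: the settlement date is N *business* days after the
# trade date, which is why a long market closure would interfere with meeting
# settlement obligations. The holiday set is hypothetical, for illustration only.
from datetime import date, timedelta

HOLIDAYS = {date(2014, 12, 25)}          # illustrative only

def settlement_date(trade_date, lag=3):
    """Roll forward `lag` business days from the trade date (T+3 or T+1)."""
    d, remaining = trade_date, lag
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5 and d not in HOLIDAYS:   # skip weekends and holidays
            remaining -= 1
    return d

print(settlement_date(date(2014, 12, 23), lag=3))   # a trade just before Christmas
print(settlement_date(date(2014, 12, 23), lag=1))   # the same trade under T+1
```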
This is why an unfortunate junior trader gets the job of watching the screens on Black Friday or during the Christmas holidays, despite nothing ever really happening. Someone needs to be there when the markets are open, just in case. Clearing fees are an additional cost to trading financial assets, but provide a valuable service to the system as a whole. As they are counterparties of last resort, they focus heavily on the risks taken by their clients, ensuring that losses incurred do not exceed the capital clients have on deposit with them. Should that occur, further losses are the responsibility of the clearing firm. # The Option Market Almost all financial markets are two-sided, open outcry markets. A two-sided market is one where there is a buy price (the bid) and a sell price (the ask or offer). The difference between the bid and the ask is known as the bid/ask spread, and is the price charged by market makers to always quote prices on both sides. The bid/ask spread is the most common way that traders make a profit; they try to take as little risk as possible and just earn the spread. Most market-makers want to carry no position overnight if possible, hedging out any residual positions they may have left at the end of the trading day. In an open outcry market, prices are constantly being updated and published. All the quotes published are aggregated and the highest bid and lowest ask across all the options exchanges for that contract is termed the National Best Bid and Offer or NBBO. Of course, any individual market maker may have a spread wider than that implied by the NBBO, and that is perfectly acceptable - simply that market maker will get fewer trades as other people are willing to pay more or take less and so are ahead in the queue. Another consequence of not matching the NBBO on both sides is that any trades you get will all be on one side; you will only get trades that involve you buying or selling only. Indeed this may be the point, a market maker may have taken down a big order earlier and is now looking to reduce her net risk by subsequently trading in the other direction. Watching a market in motion is fascinating, it is the aggregation of many different participants, each with different aims, priorities, and goals, expressed in the dynamics of four numbers: the bid and ask price, and the size of the quote on both sides, the volume of contracts / shares / currencies available at those prices. Option volumes are expressed in contracts. An option contract is for 100 shares, the same size as a round-lot of shares on stock exchanges. Despite this, contracts are quoted as if only 1 share of underlying is involved. I assume this is historical as that is how futures contracts are traded. It is also the most natural unit for pricing the option, and gives the exchange flexibility in terms of how contracts are standardised - contract sizes could be changed without requiring any change in how they quote the prices. Thus, if you buy 1 call for 1.25 USD, you will pay 125 USD, as an option contract is for 100 shares, but the price is quoted in terms of 1 share.9 Exchanges standardise the expiration date and strike prices for options. This makes things manageable, only a finite number of contracts are available for trade. • Until 2012, options expired on a monthly basis, and then weekly options for the large indexes were added. These additional expirations proved hugely popular, so weekly expirations were added for large single stock options in the last few years. 
There are now expirations every Friday in almost all liquid options. • Strike prices are also set by the exchanges, largely according to demand. Large equity index exchange-traded funds (ETFs) offer shares in funds that mirror the composition of the large indexes. ETFs are so liquid that there are strikes every 50c close to the stock price, despite underlying prices over 150 USD per share. The demand is there so the exchange provides those contracts. It is worth giving a concrete example of this. Consider the stock symbol SPY, the ETF based on the famous S&P-500 index of large US public companies. At the time of writing, this ETF is around 205 USD per share, and for the closest expiration date in a few days time, there are strikes every 50c from at least 190 to 220, i.e. the current price plus/minus 15 USD. For less liquid stocks, strikes are relatively further apart. For a lot of stocks in the range of 40-80 USD per share, strike may be still be 50c apart, possibly even 1 USD. # Conclusion We have discussed the trading environment and infrastructure involved in trading, as well as how the markets themselves are structured, focusing on options in particular. In the next article I will discuss the basic assumptions of option prices and the most common methodologies for pricing them, then discuss some of the consequences of those models. We will also discuss some price behaviour, and talk about effective ways for using options. Hopefully this will provide some insights to how focused firms make so much money trading them. 1. A futures contract (future) is a simple type of derivative that allows you to buy or sell an asset today and take delivery of the asset at a future point in time. Futures differ from options in that entering into a futures contract obligates you to trade and so it functions in many ways like stock. I will not really discuss futures much in this article but a lot of the idiosyncratic nature of options contracts seems related to the fact that the first options exchanges were offshoots of futures exchanges. Please let me know if I am wrong about this. 2. This term does make sense, and you will understand it by the end of this series. 3. American options will always be at least as valuable as the European equivalent as you can always decide to hold the option to expiration. Thus, it is sometimes useful to price an option as if it were European purely to obtain a lower bound on the price. 4. My inner cynic also insists that the consequent erection of competitive barriers to entry plays a non-trivial role too. 5. I've tried a few times and have never been wholly satisfied - it is a concept that tends to mean different things in different contexts, but you can usually determine some measure that is close to what you are after. 6. If this surprises you, think about the expense in time and fees involved in the buying or selling of a house or piece of commercial property. It is not something you can do in a few minutes or even days, and the price is always prone to uncertainty. In contrast, you can trade a few billion USD or EUR in the currency markets in seconds or minutes without much problem. 7. There are stories of traders on the New York Stock Exchange in the 1800s carrying revolvers with them when they went to settle trades with counterparties. Similarly, in the early days of poker-playing in US a lot of players were armed to ensure they left the card-rooms with their winnings. 8. 
Of course, like all risk mitigation strategies, this means there is now a massive systemic risk of the clearing system failing. However were that to occur, it is likely you are now looking for a shotgun, a 4x4, and a stock of canned food, rather than worrying about those call options you bought. 9. Back in the days of floor trading, order sizes of 10 contracts or less were often met with a derisive "would you like a lollipop with that?"
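To close the loop on the contract conventions described in this article (call and put payoffs, the strike, and the 100-share contract multiplier), here is a small illustrative sketch. It is not from the original post, it ignores fees and early exercise, and it simply spells out the arithmetic behind the "1 call at 1.25 USD costs 125 USD" example.

```python
# Illustrative contract arithmetic: expiration payoff of a call or put, and the
# cash cost of a quoted option price under the standard 100-share multiplier.
CONTRACT_MULTIPLIER = 100   # one equity option contract covers 100 shares

def payoff_at_expiry(option_type, strike, underlying_price):
    """Per-share payoff of a long option held to expiration (no early exercise)."""
    if option_type == 'call':
        return max(underlying_price - strike, 0.0)
    if option_type == 'put':
        return max(strike - underlying_price, 0.0)
    raise ValueError("option_type must be 'call' or 'put'")

def premium_paid(quoted_price, contracts=1):
    """Cash paid for a position: quoted per-share price times the multiplier."""
    return quoted_price * CONTRACT_MULTIPLIER * contracts

# Buying 1 call quoted at 1.25 USD costs 125 USD, as in the text above.
print(premium_paid(1.25, contracts=1))
# P&L per contract if the stock finishes at 210 against a 205 strike:
cost = premium_paid(1.25)
print(payoff_at_expiry('call', 205, 210) * CONTRACT_MULTIPLIER - cost)
```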
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19556556642055511, "perplexity": 1370.3804124174553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647649.70/warc/CC-MAIN-20180321121805-20180321141805-00075.warc.gz"}
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&tp=&arnumber=6300976
This chapter contains sections titled: Introduction, Patient-Oriented Approaches, The Question of the Animal, Information Ethics, Summary
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8972833156585693, "perplexity": 19644.351034869946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320201.43/warc/CC-MAIN-20170623220935-20170624000935-00054.warc.gz"}
http://pymuvr.readthedocs.io/en/latest/usage.html
# Usage¶ ## Examples¶ >>> import pymuvr >>> # define two sets of observations for two cells >>> observations_1 = [[[1.0, 2.3], # 1st observation, 1st cell ... [0.2, 2.5, 2.7]], # 2nd cell ... [[1.1, 1.2, 3.0], # 2nd observation ... []], ... [[5.0, 7.8], ... [4.2, 6.0]]] >>> observations_2 = [[[0.9], ... [0.7, 0.9, 3.3]], ... [[0.3, 1.5, 2.4], ... [2.5, 3.7]]] >>> # set parameters for the metric >>> cos = 0.1 >>> tau = 1.0 >>> # compute distances between all observations in set 1 >>> # and those in set 2 >>> pymuvr.dissimilarity_matrix(observations_1, ... observations_2, ... cos, ... tau, ... 'distance') array([[ 2.40281585, 1.92780957], [ 2.76008964, 2.31230263], [ 3.1322069 , 3.17216524]]) >>> # compute inner products >>> pymuvr.dissimilarity_matrix(observations_1, ... observations_2, ... cos, ... tau, ... 'inner product') array([[ 4.30817654, 5.97348384], [ 2.08532468, 3.85777053], [ 0.59639918, 1.10721323]]) >>> # compute all distances between observations in set 1 >>> pymuvr.square_dissimilarity_matrix(observations_1, ... cos, ... tau, ... 'distance') array([[ 0. , 2.6221159 , 3.38230952], [ 2.6221159 , 0. , 3.10221811], [ 3.38230952, 3.10221811, 0. ]]) >>> # compute inner products >>> pymuvr.square_dissimilarity_matrix(observations_1, ... cos, ... tau, ... 'inner product') array([[ 8.04054275, 3.3022304 , 0.62735459], [ 3.3022304 , 5.43940985, 0.23491838], [ 0.62735459, 0.23491838, 4.6541841 ]]) See the examples and test directories in the source distribution for more detailed examples of usage. These should also have been installed alongside the rest of the pymuvr files. The script examples/benchmark_versus_spykeutils.py compares the performance of pymuvr with the pure Python/NumPy implementation of the multiunit Van Rossum distance in spykeutils. ## Reference¶ pymuvr.dissimilarity_matrix(observations1, observations2, cos, tau, mode) Return the bipartite (rectangular) dissimilarity matrix between the observations in the first and the second list. Parameters: observations1,observations2 (list) – Two lists of multi-unit spike trains to compare. Each observations parameter must be a thrice-nested list of spike times, with observations[i][j][k] representing the time of the kth spike of the jth cell of the ith observation. cos (float) – mixing parameter controlling the interpolation between labelled-line mode (cos=0) and summed-population mode (cos=1). It corresponds to the cosine of the angle between the vectors used for the euclidean embedding of the multiunit spike trains. tau (float) – time scale for the exponential kernel, controlling the interpolation between pure coincidence detection (tau=0) and spike count mode (very large tau). Note that setting tau=0 is always allowed, but there is a range (0, epsilon) of forbidden values that tau is not allowed to assume. The upper bound of this range is proportional to the absolute value of the largest spike time in observations, with the proportionality constant being system-dependent. As a rule of thumb tau and the spike times should be within 4 orders of magnitude of each other; for example, if the largest spike time is 10s a value of tau>1ms will be expected. An exception will be raised if tau falls in the forbidden range. mode (string) – type of dissimilarity measure to be computed. Must be either ‘distance’ or ‘inner product’. 
A len(observations1) x len(observations2) numpy array containing the dissimilarity (distance or inner product) between each pair of observations that can be formed by taking one observation from observations1 and one from observations2. numpy.ndarray IndexError – if the observations in observations1 and observations2 don’t have all the same number of cells. OverflowError – if tau falls in the forbidden interval. pymuvr.square_dissimilarity_matrix(observations, cos, tau, mode) Return the all-to-all (square) dissimilarity matrix for the given list of observations. Parameters: observations (list) – A list of multi-unit spike trains to compare. cos (float) – mixing parameter controlling the interpolation between labelled-line mode (cos=0) and summed-population mode (cos=1). tau (float) – time scale for the exponential kernel, controlling the interpolation between pure coincidence detection (tau=0) and spike count mode (very large tau). mode (string) – type of dissimilarity measure to be computed. Must be either ‘distance’ or ‘inner product’. A len(observations) x len(observations) numpy array containing the dissimilarity (distance or inner product) between all possible pairs of observations. numpy.ndarray IndexError – if the observations in observations don’t have all the same number of cells. OverflowError – if tau falls in the forbidden interval. Effectively equivalent to dissimilarity_matrix(observations, observations, cos, tau), but optimised for speed. See pymuvr.dissimilarity_matrix() for details. pymuvr.distance_matrix(trains1, trains2, cos, tau) Return the bipartite (rectangular) distance matrix between the observations in the first and the second list. Convenience function; equivalent to dissimilarity_matrix(trains1, trains2, cos, tau, "distance"). Refer to pymuvr.dissimilarity_matrix() for full documentation. pymuvr.square_distance_matrix(trains, cos, tau) Return the all-to-all (square) distance matrix for the given list of observations. Convenience function; equivalent to square_dissimilarity_matrix(trains, cos, tau, "distance"). Refer to pymuvr.square_dissimilarity_matrix() for full documentation.
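A short usage sketch of the convenience wrappers documented above; it reuses the observation format from the examples section and relies only on what the reference states, namely that distance_matrix and square_distance_matrix are shorthand for the corresponding dissimilarity_matrix calls in 'distance' mode.

```python
import numpy as np
import pymuvr

# Thrice-nested lists: observations[i][j][k] is the k-th spike of cell j in observation i.
observations_1 = [[[1.0, 2.3], [0.2, 2.5, 2.7]],
                  [[1.1, 1.2, 3.0], []],
                  [[5.0, 7.8], [4.2, 6.0]]]
observations_2 = [[[0.9], [0.7, 0.9, 3.3]],
                  [[0.3, 1.5, 2.4], [2.5, 3.7]]]
cos, tau = 0.1, 1.0

d_rect = pymuvr.distance_matrix(observations_1, observations_2, cos, tau)
d_square = pymuvr.square_distance_matrix(observations_1, cos, tau)

# The wrappers should agree with the general entry points in 'distance' mode.
print(np.allclose(d_rect,
                  pymuvr.dissimilarity_matrix(observations_1, observations_2,
                                              cos, tau, 'distance')))
print(np.allclose(d_square,
                  pymuvr.square_dissimilarity_matrix(observations_1,
                                                     cos, tau, 'distance')))
```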
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322950005531311, "perplexity": 7653.185836249756}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607846.35/warc/CC-MAIN-20170524131951-20170524151951-00460.warc.gz"}
http://mathhelpforum.com/trigonometry/151364-funny-double-equation-problem.html
# Math Help - Funny double-equation problem 1. ## Funny double-equation problem 5-7sin θ€ =2cos^2 θ€ , θ€ [-360°, 450°] I'm just a little lost with the multiple use of Theda and €... Just thought of something... is θ€ simply another way of writing θ? The € sign has no input on what the problem means...? 2. Originally Posted by wiseguy 5-7sin θ€ =2cos^2 θ€ , θ€ [-360°, 450°] I'm just a little lost with the multiple use of Theda and €... Just thought of something... is θ€ simply another way of writing θ? The € sign has no input on what the problem means...? Solutions are required over the interval [-360 degrees, 450 degrees]. Substitute $\cos^2 \theta = 1 - \sin^2 \theta$ and re-arrange the resulting into a quadratic equation where $\sin \theta$ is the unknown. If you need more help, please show all your work and say where you get stuck. Also, please don't put questions in quote tags - it makes it too difficult to quote the question when replying. 3. Here's what I got... 5=2cos^2θ/7sinθ (7/2)5=cos^2θ/sinθ 17.5=cosθ*cotθ Can I carry this out like a normal equation? What I use to eliminate the cosx cotx mess? 4. I did an alternative approach to the problem, however I'm not sure if the 1 on the right side works 5=2cos^2θ/7sinθ (7/2)5=1-sin^2θ/sinθ 17.5=1-sinθ 5. Originally Posted by wiseguy I did an alternative approach to the problem, however I'm not sure if the 1 on the right side works 5=2cos^2θ/7sinθ (7/2)5=1-sin^2θ/sinθ 17.5=1-sinθ Substitute $w = \sin \theta$. Then, following from my earlier reply, you have: $5 - 7 w = 2(1 - w^2) \Rightarrow 2w^2 - 7w + 3 =0$. Solve for w. One solution is rejected (why?). The other solution leads to $\sin \theta = \frac{1}{2}$. Solve this equation. 6. Okay, I think I got it: x=1/2, x=3 sinθ=1/2, sinθ=3 arcsin(1/2)=0.523599, arcsin3=no solution so there is only one solution, and it is θ=0.523599 ...? Thank you 7. Originally Posted by wiseguy Okay, I think I got it: x=1/2, x=3 sinθ=1/2, sinθ=3 arcsin(1/2)=0.523599, arcsin3=no solution so there is only one solution, and it is θ=0.523599 ...? Thank you 1) θ needs to be in degrees, as stated in the original problem, so θ = 30°. 2) In the future, you should express radian measures as something times pi if you can. IOW it's better to say θ = π/6 instead of 0.523599.... sinθ = 1/2 -> θ = π/6 or 30° is one of those things you should really memorize. 8. Originally Posted by wiseguy Okay, I think I got it: x=1/2, x=3 sinθ=1/2, sinθ=3 arcsin(1/2)=0.523599, arcsin3=no solution so there is only one solution, and it is θ=0.523599 ...? Thank you Your answer is supposed to be in degrees since the question uses degree. arcsin (0.5) = 30 degree and this is one of the solutions in the range given. There are more: 150, 390, -330 , -210 9. Okay, how would I tie the thing where sin is positive in the first and second quadrant to the four solutions of 150, 390, -330 , -210? 10. Originally Posted by wiseguy Okay, how would I tie the thing where sin is positive in the first and second quadrant to the four solutions of 150, 390, -330 , -210? The reference angle is 30(1st quadrant) , 150(2nd quadrant). Note also that the period of a sin graph is 360. In other words, it repeats itself every 360. so 30+360=390 How about 150+360=510? Look at the range. Now you go clockwise direction, where sin is now positive in the 3rd and 4th quadrant. In the 3rd quadrant, -(180+30) and the 4th: -(360-30) As an alternative, you can use the general formula for sine. 11. Got it! Now I have to remember this stuff... lol
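A quick numerical cross-check of the thread's conclusion (not part of the original discussion): substituting $w = \sin\theta$ gives $2w^2 - 7w + 3 = 0$, so $w = 1/2$ (the root $w = 3$ is rejected because $|\sin\theta| \leq 1$), and scanning whole degrees over the stated interval recovers the five solutions.

```python
# Solve 5 - 7*sin(theta) = 2*cos(theta)**2 on [-360, 450] degrees numerically.
import numpy as np

theta = np.arange(-360, 451)                       # whole degrees in the range
lhs = 5 - 7 * np.sin(np.radians(theta))
rhs = 2 * np.cos(np.radians(theta)) ** 2
solutions = theta[np.isclose(lhs, rhs)]
print(solutions)                                   # the thread's answer: -330, -210, 30, 150, 390
```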
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945685625076294, "perplexity": 1842.5705105190898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826908.63/warc/CC-MAIN-20160723071026-00206-ip-10-185-27-174.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Opening_%28morphology%29
# Opening (morphology) The opening of the dark-blue square by a disk, resulting in the light-blue square with round corners. In mathematical morphology, opening is the dilation of the erosion of a set A by a structuring element B: ${\displaystyle A\circ B=(A\ominus B)\oplus B,\,}$ where ${\displaystyle \ominus }$ and ${\displaystyle \oplus }$ denote erosion and dilation, respectively. Together with closing, the opening serves in computer vision and image processing as a basic workhorse of morphological noise removal. Opening removes small objects from the foreground (usually taken as the bright pixels) of an image, placing them in the background, while closing removes small holes in the foreground, changing small islands of background into foreground. These techniques can also be used to find specific shapes in an image. Opening can be used to find things into which a specific structuring element can fit (edges, corners, ...). One can think of B sweeping around the inside of the boundary of A, so that it does not extend beyond the boundary, and shaping the A boundary around the boundary of the element. ## Properties • Opening is idempotent, that is, ${\displaystyle (A\circ B)\circ B=A\circ B}$. • Opening is increasing, that is, if ${\displaystyle A\subseteq C}$, then ${\displaystyle A\circ B\subseteq C\circ B}$. • Opening is anti-extensive, i.e., ${\displaystyle A\circ B\subseteq A}$. • Opening is translation invariant. • Opening and closing satisfy the duality ${\displaystyle A\bullet B=(A^{c}\circ B^{c})^{c}}$, where ${\displaystyle \bullet }$ denotes closing. ## Extension: Opening by reconstruction In morphological opening ${\displaystyle (A\ominus B)\oplus B}$ , the erosion operation removes objects that are smaller than structuring element B and the dilation operation (approximately) restores the size and shape of the remaining objects. However, restoration accuracy in the dilation operation depends highly on the type of structuring element and the shape of the restoring objects. The opening by reconstruction method is able to restore the objects more completely after erosion has been applied. It is defined as the reconstruction by geodesic dilation of ${\displaystyle n}$ erosions of ${\displaystyle F}$ by ${\displaystyle B}$ with respect to ${\displaystyle F}$ : ${\displaystyle O_{R}^{(n)}(F)=R_{F}^{D}[(F\ominus nB)],}$[1] where ${\displaystyle (F\ominus nB)}$ denotes a marker image and ${\displaystyle F}$ is a mask image in morphological reconstruction by dilation. ${\displaystyle R_{F}^{D}[(F\ominus nB)]=D_{F}^{(k)}[(F\ominus nB)],}$[1] ${\displaystyle D}$ denotes geodesic dilation with ${\displaystyle k}$ iterations until stability, i.e., such that ${\displaystyle D_{F}^{(k)}[(F\ominus nB)]=D_{F}^{(k-1)}[(F\ominus nB)].}$[1] Since ${\displaystyle D_{F}^{(1)}[(F\ominus nB)]=([(F\ominus nB)]\oplus B)\cap F}$,[1] the marker image is limited in the growth region by the mask image, so the dilation operation on the marker image will not expand beyond the mask image. As a result, the marker image is a subset of the mask image ${\displaystyle (F\ominus nB)\subseteq F.}$[1] (Strictly, this holds for binary masks only. However, similar statements hold when the mask is not binary.) The images below present a simple opening-by-reconstruction example which extracts the vertical strokes from an input text image. Since the original image is converted from grayscale to binary image, it has a few distortions in some characters so that same characters might have different vertical lengths. 
In this case, the structuring element is an 8-pixel vertical line which is applied in the erosion operation in order to find objects of interest. Moreover, morphological reconstruction by dilation, ${\displaystyle R_{F}^{D}[(F\ominus nB)]=D_{F}^{(k)}[(F\ominus nB)]}$[1] iterates ${\displaystyle k=9}$ times until the resulting image converges. Original image for opening by reconstruction Marker image Result of opening by reconstruction
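A small sketch of these operations in code (illustrative only; it assumes SciPy's ndimage module and a random binary image rather than the text image above): opening as erosion followed by dilation, the idempotence and anti-extensivity properties listed earlier, and opening by reconstruction via geodesic dilation iterated to stability.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
A = rng.random((64, 64)) > 0.6                   # a noisy binary image
B = np.ones((3, 3), dtype=bool)                  # structuring element

eroded = ndimage.binary_erosion(A, structure=B)
opened = ndimage.binary_dilation(eroded, structure=B)   # A o B = dilation of the erosion

print("matches binary_opening:",
      np.array_equal(opened, ndimage.binary_opening(A, structure=B)))
print("anti-extensive (A o B subset of A):", bool(np.all(opened <= A)))
print("idempotent ((A o B) o B == A o B):",
      np.array_equal(ndimage.binary_opening(opened, structure=B), opened))

# Opening by reconstruction: geodesic dilation of the eroded marker inside the
# mask A, iterated until stability (SciPy calls this binary_propagation).
reconstructed = ndimage.binary_propagation(eroded, structure=B, mask=A)
print("opened <= reconstructed <= A:",
      bool(np.all(opened <= reconstructed) and np.all(reconstructed <= A)))
print("pixel counts (A, opened, reconstructed):",
      int(A.sum()), int(opened.sum()), int(reconstructed.sum()))
```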
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 25, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7585924863815308, "perplexity": 1343.1349787763897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00708.warc.gz"}
https://www.physicsforums.com/threads/newtons-cooling-law.239803/
# Newton's cooling law

• #1 Ry122 For Newton's cooling law $$q = h*a \Delta T$$ q is the rate of energy loss of a body but for what unit time? For example if q = 3 does the body lose 3 watts of energy in 1 second?
• #2 Gold Member Whatever units you want as long as you are consistent (i.e mixing imperial and SI is a bad idea). So yes, assuming you are using SI for the constant and the variables the time will be in seconds.
• #3 armis The differential form is more general $$\partial{Q}/\partial{t} = -k{\oint}\nabla{T}\vec{dS}$$ $$\partial{Q}/\partial{t}$$ is the amount of heat transferred per time unit as long as you are using SI. [W] or [J*s^-1]. So it's J that are transferred in one second not W And you have a minus missing I may be wrong, feel free to correct me
• #4 ironhill Newton's law of cooling: If you put milk in your coffee then leave it for a minute it will be warmer than if you leave it for a minute then add milk.
• #5 armis That's an efficient way of applying the Newton's law of cooling :)
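To make the units point in this thread concrete, here is a tiny worked example (the coefficient and geometry are made-up illustrative values): with SI inputs, $q = h a \Delta T$ is a rate in watts, and the energy lost over an interval is that rate times the elapsed time in seconds.

```python
# With SI inputs, q = h * a * dT is a rate in watts (J/s); energy over an
# interval is q multiplied by the elapsed time in seconds.
h = 10.0        # convective coefficient, W / (m^2 K)  (illustrative value)
area = 0.5      # surface area, m^2
dT = 20.0       # temperature difference between body and surroundings, K

q = h * area * dT          # rate of heat loss, W = J/s
print(f"q = {q} W")
print(f"energy lost in 1 s:  {q * 1} J")
print(f"energy lost in 60 s: {q * 60} J")
```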
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8166705369949341, "perplexity": 1985.8333381075547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00463.warc.gz"}
http://mxnet.incubator.apache.org/versions/1.6/api/r/docs/api/mx.nd.ravel.multi.index.html
# mx.nd.ravel.multi.index

## Description

Converts a batch of index arrays into an array of flat indices. The operator follows numpy conventions, so a single multi-index is given by a column of the input matrix. The leading dimension may be left unspecified by using -1 as a placeholder.

Example:

A = [[3,6,6],[4,5,1]]
ravel(A, shape=(7,6)) = [22,41,37]
ravel(A, shape=(-1,6)) = [22,41,37]

## Arguments

Argument | Description
-------- | -----------
data | NDArray-or-Symbol. Batch of multi-indices.
shape | Shape(tuple), optional, default=None. Shape of the array into which the multi-indices apply.

## Value

out: The result mx.ndarray.
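Since the operator is documented as following numpy conventions, the worked example above can be cross-checked against NumPy's own ravel_multi_index. This snippet is an illustration and is not part of the MXNet documentation; note that NumPy's function has no -1 placeholder for an unspecified leading dimension.

```python
import numpy as np

A = [[3, 6, 6], [4, 5, 1]]                    # one multi-index per column
flat = np.ravel_multi_index(A, dims=(7, 6))
print(flat)                                   # the doc's example gives [22, 41, 37]
```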
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41488948464393616, "perplexity": 6528.792273238916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00079.warc.gz"}
https://socratic.org/questions/i-need-help-with-subscripts-for-empirical-formula-how-do-i-know-which-number-to-
# I need help with Subscripts for empirical formula, how do I know which number to multiply so that I get a whole number?

Mar 10, 2015

When you have to calculate a compound's empirical formula from its percent composition, there are a few tricks to use to help you deal with decimal mole ratios between the atoms that comprise your compound. Now, I assume you know how to get to this point, so I won't show you the whole approach. Let's assume you have a compound containing $\text{A}$, $\text{B}$, and $\text{C}$, and you determine the mole ratios between these elements to be

$\text{A} : 2.33$
$\text{B} : 1$
$\text{C} : 1.67$

In such cases it is very useful to use mixed fractions. Mixed fractions are a combination of a whole number and a regular (or proper) fraction. In this case, $2.33$ is equal to 2 and 1/3, or 7/3, and $1.67$ is equal to 1 and 2/3, or 5/3. This makes the ratios equal to

$\text{A} : 7/3$
$\text{B} : 1$
$\text{C} : 5/3$

Now multiply all of them by 3 to get rid of the denominator and you'll get the empirical formula ${A}_{7} {B}_{3} {C}_{5}$

If you get enough practice with empirical formulas you'll be able to "see" the answer faster. For example, if you have a compound comprised of $\text{X}$, $\text{Y}$, and $\text{Z}$, and the mole ratio looks like this

$\text{X} : 1.33$
$\text{Y} : 1$
$\text{Z} : 1$

it will become obvious in time that you have to multiply all of them by 3 to get all-whole numbers and an empirical formula of ${X}_{4} {Y}_{3} {Z}_{3}$

Notice that the mixed fractions method is useful in this case as well, since 1.33 is actually 1 and 1/3, or 4/3. As a conclusion, it takes a little practice to be able to determine which numbers can be written in a useful way as mixed fractions, so spend some time on getting this skill down.

SIDE NOTE I assume you know how to get around mixed fractions, so I won't detail how I got 7/3 or 4/3.

Mar 10, 2015

After you divide by the smallest number of moles: if you end up with a number ending in .25 then multiply all numbers by 4. If you end up with a number ending in .33 then multiply all numbers by 3. If you end up with a number ending in .20 then multiply all numbers by 5. If you end up with a number ending in .5 then multiply all numbers by 2.
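The mixed-fraction trick described above is easy to automate. The helper below is an illustration only (the max_denominator cutoff of 6 is an arbitrary choice suited to typical small subscripts): it approximates each mole ratio by a simple fraction and then scales by the least common multiple of the denominators.

```python
# Turn decimal mole ratios into whole-number empirical-formula subscripts.
from fractions import Fraction
from math import lcm        # Python 3.9+

def subscripts(ratios, max_denominator=6):
    fracs = [Fraction(r).limit_denominator(max_denominator) for r in ratios]
    scale = lcm(*[f.denominator for f in fracs])
    return [int(f * scale) for f in fracs]

print(subscripts([2.33, 1, 1.67]))   # A:B:C ratios from the answer -> [7, 3, 5]
print(subscripts([1.33, 1, 1]))      # X:Y:Z ratios -> [4, 3, 3]
```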
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347317814826965, "perplexity": 284.9141213819734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313987.32/warc/CC-MAIN-20190818165510-20190818191510-00163.warc.gz"}
https://eres.architexturez.net/doc/oai-eres-id-eres2005-201
This paper examines the distributional characteristics of REITs using the daily NAREIT indices for the period 1997-2004. While previous studies have examined the distributional properties of REITs, they have largely used lower-frequency monthly data. This paper has two primary aims. Firstly, it extends the existing literature on REITs by utilising the approaches proposed by Peiro (1999, 2002) and illustrating that the conventional skewness statistic, which is normally used to test for normality in return distributions, may provide erroneous inferences regarding the distribution because it is itself based on the normal distribution. We test for non-normality using a variety of alternative tests that make minimal assumptions about the shape of the underlying distribution. Secondly, building on the reported findings, we analyse the implications for risk measurement. We estimate value-at-risk measures on a daily basis for REITs. While VaR has become a standard risk measure over the last ten years, it suffers from a number of problems, especially concerning the assumptions made regarding normality in the basic estimation of the measure (Hull & White, 1998). We therefore make use of Extreme Value Theory in examining the tail behaviour of REITs and integrate this with the estimation of daily value-at-risk figures.
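This is not the paper's methodology, but a minimal sketch of the issue the abstract raises: on heavy-tailed daily returns (simulated Student-t data standing in for REIT index returns), a VaR figure computed under a normality assumption can differ noticeably from a simple empirical-quantile VaR, which is one motivation for bringing Extreme Value Theory to bear on the tails.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Heavy-tailed simulated daily returns (illustrative stand-in for index data).
returns = stats.t.rvs(df=3, scale=0.01, size=2000, random_state=rng)

alpha = 0.01                                    # 99% one-day VaR
var_normal = -(returns.mean() + returns.std() * stats.norm.ppf(alpha))
var_empirical = -np.quantile(returns, alpha)

print(f"99% 1-day VaR, normal assumption:  {var_normal:.4f}")
print(f"99% 1-day VaR, empirical quantile: {var_empirical:.4f}")
```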
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8708873987197876, "perplexity": 933.2849356584265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000231.40/warc/CC-MAIN-20190626073946-20190626095946-00466.warc.gz"}
https://med.libretexts.org/Bookshelves/Anatomy_and_Physiology/Book%3A_Anatomy_and_Physiology_(Boundless)/10%3A_Overview_of_the_Nervous_System/10.5%3A_Neurophysiology/10.5D%3A_Resting_Membrane_Potentials
# 10.5D: Resting Membrane Potentials

LEARNING OBJECTIVES

• Describe the resting membrane potential of cells

The potential difference in a resting neuron is called the resting membrane potential. This causes the membrane to be polarized. The value of the resting membrane potential varies from −40mV to −90mV in different types of neurons. The resting membrane potential exists only across the membrane. Most of the time, the difference in ionic composition of the intracellular and extracellular fluids and the difference in ion permeability generate the resting membrane potential difference. The interactions that generate the resting potential are modeled by the Goldman equation. It is based on the charges of the ions in question, as well as the difference between their inside and outside concentrations and the relative permeability of the plasma membrane to each ion, where:

$$E_m = \frac{RT}{F}\ln\left(\frac{P_K[K^+]_{out} + P_{Na}[Na^+]_{out} + P_{Cl}[Cl^-]_{in}}{P_K[K^+]_{in} + P_{Na}[Na^+]_{in} + P_{Cl}[Cl^-]_{out}}\right)$$

Goldman equation: R is the universal gas constant, equal to 8.314 joules·K−1·mol−1 T is the absolute temperature, measured in kelvins (= K = degrees Celsius + 273.15) F is the Faraday constant, equal to 96,485 coulombs·mol−1 or J·V−1·mol−1 The three ions that appear in this equation are potassium (K+), sodium (Na+), and chloride (Cl−). The Goldman formula essentially expresses the membrane potential as an average of the reversal potentials for the individual ion types, weighted by permeability. In most animal cells, the permeability to potassium is much higher in the resting state than the permeability to sodium. Consequently, the resting potential is usually close to the potassium reversal potential.

### Key Points

• The potential difference in a resting neuron is called the resting membrane potential. • The value of the resting membrane potential varies from -40mV to -90mV in different types of neurons. • Most of the time, the difference in ionic composition of the intracellular and extracellular fluids and the difference in ion permeability generate the resting membrane potential difference. • The Goldman formula essentially expresses the membrane potential as an average of the reversal potentials for the individual ion types, weighted by permeability.

### Key Terms

• resting membrane potential: The potential difference in a resting neuron that causes its membrane to be polarized. • Goldman equation: Models the interactions that generate resting membrane potential.
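As a quick numerical illustration of the Goldman equation above, the sketch below plugs in commonly quoted, textbook-style concentrations and relative permeabilities for a resting mammalian neuron. These particular numbers are assumptions chosen for illustration (they do not come from this page); with them the result lands in the expected −60 to −70 mV range.

```python
import math

R = 8.314      # universal gas constant, J·K^-1·mol^-1
F = 96485.0    # Faraday constant, C·mol^-1
T = 310.15     # absolute temperature in kelvins (37 °C)

# Relative permeabilities and ion concentrations in mM; illustrative values only.
P   = {"K": 1.0,   "Na": 0.04,  "Cl": 0.45}
out = {"K": 5.0,   "Na": 145.0, "Cl": 110.0}
ins = {"K": 140.0, "Na": 15.0,  "Cl": 10.0}

# Goldman equation: note that chloride's inside/outside concentrations are swapped
# relative to the cations because of its negative charge.
numerator   = P["K"] * out["K"] + P["Na"] * out["Na"] + P["Cl"] * ins["Cl"]
denominator = P["K"] * ins["K"] + P["Na"] * ins["Na"] + P["Cl"] * out["Cl"]

Em = (R * T / F) * math.log(numerator / denominator)  # in volts
print(f"Resting membrane potential ≈ {Em * 1e3:.1f} mV")
```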
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7257236838340759, "perplexity": 1701.7578004284796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201904.55/warc/CC-MAIN-20190319052517-20190319074517-00542.warc.gz"}
http://clay6.com/qa/13686/if-a-chord-of-the-parabola-y-2-4x-passes-through-its-focus-and-makes-an-ang
# If a chord of the parabola $y^2=4x$ passes through its focus and makes an angle $\theta$ with the $x$-axis, then its length is

$\begin{array}{ll} (a)\;4\cos^2\theta & \quad (b)\;4\sin^2\theta \\ (c)\;4\,\mathrm{cosec}^2\theta & \quad (d)\;4\sec^2\theta \end{array}$

Answer: $(c)\;4\,\mathrm{cosec}^2\theta$. For $y^2=4ax$, a focal chord inclined at angle $\theta$ to the axis has length $4a\,\mathrm{cosec}^2\theta$; here $a=1$, so the length is $4\,\mathrm{cosec}^2\theta$.
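Not part of the original answer, but a quick numerical sanity check of option (c): parametrise the chord through the focus $(1,0)$ by signed distance $t$ along the direction $(\cos\theta,\sin\theta)$, substitute into $y^2=4x$, and compare the distance between the two intersection points with $4\,\mathrm{cosec}^2\theta$. The test angle below is arbitrary.

```python
import numpy as np

theta = np.radians(37.0)              # arbitrary test angle
c, s = np.cos(theta), np.sin(theta)

# Points on the focal chord: (1 + t*c, t*s).  Substituting into y^2 = 4x gives
#   s^2 * t^2 - 4*c*t - 4 = 0
t1, t2 = np.roots([s**2, -4 * c, -4])

print(abs(t1 - t2))    # chord length, ≈ 11.045 for theta = 37 degrees
print(4 / s**2)        # 4 cosec^2(theta), the same value
```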
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2857291102409363, "perplexity": 139.99901368680494}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102967.65/warc/CC-MAIN-20170817053725-20170817073725-00240.warc.gz"}
https://mathematica.stackexchange.com/questions/15714/whats-the-best-practice-for-nested-local-constants?noredirect=1
# What's the best practice for nested local constants? [duplicate] Possible Duplicate: How to avoid nested With[]? I have many situations where I have a constant that is local to a function, with other constants that are computed from it. I see that I can implement this using nested With statements, like leapSeconds = With[{ usnoData = Import["http://maia.usno.navy.mil/ser7/tai-utc.dat", "Text", CharacterEncoding -> "UTF8"]}, With[{ lineLength = StringPosition[usnoData, "\n"][[1, 1]], lsCount = Length[StringCases[usnoData, "\n"]] + 1}, DateList[#] & /@ (StringTake[ usnoData, {2 + #*lineLength, 9 + #*lineLength}] & /@ Range[0, lsCount - 1])]] but this seems cumbersome. Is there a better way to do this sort of thing (I know that for this example I could dispense with the constants, but assume for the exercise that I do indeed need the constants)? Is nesting With statments a reasonable approach, or something I've just come up with by not understanding Mathematica very well? ## marked as duplicate by Leonid Shifrin, Ajasja, whuber, Sjoerd C. de Vries, tkottDec 5 '12 at 16:27 • Then I don't see what the question is. My answer is this: yes, this situation is very common in practice, and then I use LetL, described in my answer to that question. The only other option is to separate every single constant's computation into a separate function, and then chain those functions, but often this may not be desirable, both because of extra boilerplate of parameter-passing, and because those new functions may not be general enough to justify their existence. – Leonid Shifrin Dec 4 '12 at 19:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39542776346206665, "perplexity": 1125.0754567390188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481076.11/warc/CC-MAIN-20191205141605-20191205165605-00124.warc.gz"}
https://latex.org/forum/viewtopic.php?f=45&t=25475
LaTeX forum ⇒ Graphics, Figures & Tables ⇒ Figure in mac Information and discussion about graphics, figures & tables in LaTeX documents.

patriciakh Posts: 1 Joined: Fri Jan 02, 2015 5:04 pm

Figure in mac

Hi, I am new to LaTeX and am using a Mac, which has doubled my problem... I am trying to insert a figure in my thesis, but each time it says that the figure is not found. This is even though I have defined a folder for my figures. Can anyone help me?

Tags:

Stefan Kottwitz Posts: 9595 Joined: Mon Mar 10, 2008 9:44 pm

Welcome to the forum! Which command are you using for including? Did you specify the file name and path? Perhaps show us a code line and tell us where exactly the image is located, relative or absolute path. You could also add the .log file here, as an attachment to a forum post. Stefan

coachbennett1981 Posts: 238 Joined: Fri Feb 05, 2010 10:15 pm

Two things: You need to make sure that your file extension is accepted. Also, if you are using the \includegraphics command, you need to make sure the file you want to include is in the same place (folder) as your thesis. Nick

Maks71 Posts: 6 Joined: Wed Aug 02, 2017 12:02 pm

Hi, I also just started using LaTeX (today :<). Love the template, but my figures folder cannot be located. Any ideas? I can see it and the graphicspath is there. Thanks maks

Johannes_B Site Moderator Posts: 4173 Joined: Thu Nov 01, 2012 4:08 pm

Which template? The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.

Stefan Kottwitz Posts: 9595 Joined: Mon Mar 10, 2008 9:44 pm

Hi Maks, welcome to the forum! In addition to the information about which template you are using, perhaps also post error messages or warnings here. You could attach the .log file here; the link "Attachments" is below the text edit field, when writing a post. Take care that you don't write in draft mode (document class or graphicx package option `draft`), as that would skip loading the images for compiling speed. Stefan
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8129581809043884, "perplexity": 5405.479738060916}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.61/warc/CC-MAIN-20200813014639-20200813044639-00368.warc.gz"}
http://papers.nips.cc/paper/5909-learning-causal-graphs-with-small-interventions
# NIPS Proceedingsβ ## Learning Causal Graphs with Small Interventions A note about reviews: "heavy" review comments were provided by reviewers in the program committee as part of the evaluation process for NIPS 2015, along with posted responses during the author feedback period. Numerical scores from both "heavy" and "light" reviewers are not provided in the review link below. [PDF] [BibTeX] [Supplemental] [Reviews] ### Abstract We consider the problem of learning causal networks with interventions, when each intervention is limited in size under Pearl's Structural Equation Model with independent errors (SEM-IE). The objective is to minimize the number of experiments to discover the causal directions of all the edges in a causal graph. Previous work has focused on the use of separating systems for complete graphs for this task. We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case. In addition, we present a novel separating system construction, whose size is close to optimal and is arguably simpler than previous work in combinatorics. We also develop a novel information theoretic lower bound on the number of interventions that applies in full generality, including for randomized adaptive learning algorithms. For general chordal graphs, we derive worst case lower bounds on the number of interventions. Building on observations about induced trees, we give a new deterministic adaptive algorithm to learn directions on any chordal skeleton completely. In the worst case, our achievable scheme is an $\alpha$-approximation algorithm where $\alpha$ is the independence number of the graph. We also show that there exist graph classes for which the sufficient number of experiments is close to the lower bound. In the other extreme, there are graph classes for which the required number of experiments is multiplicatively $\alpha$ away from our lower bound. In simulations, our algorithm almost always performs very close to the lower bound, while the approach based on separating systems for complete graphs is significantly worse for random chordal graphs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5833562612533569, "perplexity": 299.9632192701441}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201455.20/warc/CC-MAIN-20190318152343-20190318174343-00408.warc.gz"}
https://hypothes.is/search?q=tag%3Agroups
73 Matching Annotations 1. Aug 2020 2. www.cdc.gov www.cdc.gov 1. Killerby. M. E., (2020) Characteristics Associated with Hospitalization Among Patients with COVID-19 — Metropolitan Atlanta, Georgia, March–April 2020. Centers for Disease Control and Prevention. Retrieved from: https://www.cdc.gov/mmwr/volumes/69/wr/mm6925e1.htm #### URL 3. www.idecorp.com www.idecorp.com 1. socially-distanced in-school students andat-home students can join Use the tools that are available to make the in-person material as accessible as possible to the at home students. • iPad as a document camera • AirServer to the board, share screen to Meet Repeat the sessions on A/B days? One day per week for these sessions? 2. students may connect and workwith others at home or in schoolvia videoconferencing. Could students use breakout Google Meet rooms during their off day to work together? Teachers could facilitate which ones are open at which times for students or rotate into those as the groups (or individuals) in person are working. #### URL 4. covid-19.iza.org covid-19.iza.org 1. Papageorge. N. W., Zahn. M. V. Belot. M., van den Broek-Altenburg. E., Choi. S., Jamison. J. C., (2020). Socio-​Demographic Factors Associated with Self-​Protecting Behavior during the COVID-19 Pandemic. Institute of Labor Economics. Retrieved from: https://covid-19.iza.org/publications/dp13333/ #### URL 5. covid-19.iza.org covid-19.iza.org 1. Von Gaudecker. H. M., Holler. R., Janys. L., Siflinger. B., Zimpelmann. C. (2020). Labour Supply in the Early Stages of the COVID-19 Pandemic: Empirical Evidence on Hours, Home Office, and Expectations. Institute of labor economics. Retrieved from: https://covid-19.iza.org/publications/dp13158/ #### URL 6. www.theguardian.com www.theguardian.com 1. Groups are great for brief bursts of humour or frustration, but, by their very nature, far less useful for supporting the circulation of public information. To understand why this is the case, we have to think about the way in which individuals can become swayed and influenced once they belong to a group. #### URL 7. www.nature.com www.nature.com 1. Woolston. C., (2020) ‘It’s like we’re going back 30 years’: how the coronavirus is gutting diversity in science. Nature. Retrieved from: https://www.nature.com/articles/d41586-020-02288-3?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews&sf236423828=1 #### URL 8. Jul 2020 9. www.thelancet.com www.thelancet.com 1. Cook, Marion. ‘Potential Factors Linked to High COVID-19 Death Rates in British Minority Ethnic Groups’. The Lancet Infectious Diseases 0, no. 0 (17 July 2020). https://doi.org/10.1016/S1473-3099(20)30583-1. #### URL 10. Jun 2020 11. www.researchgate.net www.researchgate.net 1. Li, J., Hallsworth. A.G. and Coca-Stefaniak, J.A. (2020), “The changing grocery shopping behavior of Chineseconsumers at the outset of the COVID-19 outbreak”, Tijdschrift voor Economische en Sociale Geografie. https://doi.org/10.1111/tesg.12420 #### URL 12. techcrunch.com techcrunch.com 1. Look at what it looks like when you’re creating the internet in a society that values the group over the individual. #### URL 1. The Rails team has decided to migrate all the talk, docs and core Google groups to  https://discuss.rubyonrails.org/. #### URL 14. May 2020 15. onlinelibrary.wiley.com onlinelibrary.wiley.com 1. Banerjee, D. (2020). The Impact of Covid‐19 Pandemic on Elderly Mental Health. International Journal of Geriatric Psychiatry, gps.5320. https://doi.org/10.1002/gps.5320 #### URL 16. 
www.thelancet.com www.thelancet.com 1. The Lancet Public Health, May 2020, Volume 5, Issue 5, Pages e235-e296. https://www.thelancet.com/journals/lanpub/issue/current #### URL 17. www.thelancet.com www.thelancet.com 1. Jordan, R. E., & Adab, P. (2020). Who is most likely to be infected with SARS-CoV-2? The Lancet Infectious Diseases, S1473309920303959. https://doi.org/10.1016/S1473-3099(20)30395-9 #### URL 18. jamanetwork.com jamanetwork.com 1. Baggett, T. P., Keyes, H., Sporn, N., & Gaeta, J. M. (2020). Prevalence of SARS-CoV-2 Infection in Residents of a Large Homeless Shelter in Boston. JAMA. https://doi.org/10.1001/jama.2020.6887 #### URL 19. www.pandemicpolitics.net www.pandemicpolitics.net 1. PandemicPolitics. Pandemic politics: Political attitudes and crisis communication. https://www.pandemicpolitics.net #### URL 20. psyarxiv.com psyarxiv.com 1. Youngstrom, E. A., Ph.D., Hinshaw, S. P., Stefana, A., Chen, J., Michael, K., Van Meter, A., … Vieta, E. (2020, April 20). Working with Bipolar Disorder During the COVID-19 Pandemic: Both Crisis and Opportunity. https://doi.org/10.31234/osf.io/wg4bj #### URL 21. Apr 2020 22. web.hypothes.is web.hypothes.is 1. scoped to a particular domain. Climate Feedback group (see here and here) seems to be one of these Restricted Publisher Groups. However, it doesn't seem to be "scoped to a particular domain" (see for example here, here, or here). Is this a third configuration of Publisher Groups? Or a different kind of groups altogether? Or have these domains been enabled one by one to the Publisher Group scope? Is this behaviour explained somewhere? #### URL 23. www.thelancet.com www.thelancet.com 1. Holmes, E. A., O’Connor, R. C., Perry, V. H., Tracey, I., Wessely, S., Arseneault, L., Ballard, C., Christensen, H., Cohen Silver, R., Everall, I., Ford, T., John, A., Kabir, T., King, K., Madan, I., Michie, S., Przybylski, A. K., Shafran, R., Sweeney, A., … Bullmore, E. (2020). Multidisciplinary research priorities for the COVID-19 pandemic: A call for action for mental health science. The Lancet Psychiatry, S2215036620301681. https://doi.org/10.1016/S2215-0366(20)30168-1 #### URL 24. psyarxiv.com psyarxiv.com 1. Bailey, A., Knobe, J., & Newman, G. (2020). Value-based Essentialism: Essentialist Beliefs About Non-biological Social Groups [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/m2eby #### URL 25. www.foliomag.com www.foliomag.com 1. The world’s largest exhibitions organizer, London-based Informa plc, outlined on Thursday morning a series of emergency actions it’s taking to alleviate the impact of the COVID-19 pandemic on its events business, which drives nearly two-thirds of the company’s overall revenues. Noting that the effects have been “significantly deeper, more volatile and wide-reaching,” than was initially anticipated, the company says it’s temporarily suspending dividends, cutting executive pay and issuing new shares worth about 20% of its total existing capital in an effort to strengthen its balance sheet and reduce its approximately £2.4 billion ($2.9 billion) in debt to £1.4 billion ($1.7 billion). Further, Informa says it’s engaged in “constructive discussions” with its U.S.-based debt holders over a covenant waiver agreement. Informa Group, que posee editoriales como Taylor & Francis, de Informa Intelligent Division toma medidas en su sector de conferencias y eventos. Provee dos tercios de sus ingresos totales, 2.9 billion dólares. Emite acciones y para el mercado norteamericano acuerdos de deuda. 
Mientras la parte editorial que aporta un 35% de los ingresos se mantiene sin cambios y con pronósticos estables y sólidos. Stephen Carter CEO #### URL 26. Nov 2019 27. vidooly.com vidooly.com 1. The LinkedIn groups is a dedicated forum on LinkedIn, where professionals from the same industry or same interest discuss relevant information about a product, topic, or service. #### URL 28. courses.lumenlearning.com courses.lumenlearning.com 1. Although “group” and “team” are often used interchangeably, the process of interaction between the two is different. Beebe & Mottet (2010) suggest that we think of groups and teams as existing on a continuum. On one end, a small group consists of three to fifteen people who share a common purpose, feel a sense of belonging to the group, and exert influence on each other (Beebe & Masterson, 2009). On the other end, a team is a coordinated group of people organized to work together to achieve a specific, common goal (Beebe & Masterson, 2009). A team has members with specific roles to play--quarterback in football, software design engineer in a business setting, whereas groups don't necessarily have people with specialities. #### URL 29. Apr 2019 30. poseidon01.ssrn.com poseidon01.ssrn.com 1. Text-Based SourcesSummary of the Final Report of QTD Working Group II.1Nikhar Gaikwad, Veronica Herreraand Robert Mickey* # ShareKnowledge #QualitativeUAEM Al analizar el contenido del texto "Text-Based Sources" (American Political Science Association Organized Section for Qualitative and Multi-Method Research, Qualitative Transparency Deliberations, Working Group Final Reports, Report II.1); hemos determinado algunas reflexiones que deseamos compartir. 2. The report describes several types of transparency-enhancing practices relevant to text-based sources. Some of these practices improve transparency regarding the process of generating evidence.Clearly identifying asource's locationhelps other researcherslocate and evaluate evidence, expanding the scope and reach of one's research Cabe destacar que las recientes discusiones que han surgido sobre la transparencia de la investigación cualitativa en la Ciencia Política han sido un tema de debate serio, de tal manera que ha sido necesario implementar un código de ética con el objetivo de aumentar y reforzar la transparencia en las fuentes basas en texto. 3. rk.Drawing on QTD deliberations, existing scholarly work, and our own reflections, we discuss a range of transparency-enhancing practices and technologies, the costs and risks attendant with each, and their potential benefit Sabemos que gran parte de la investigación considerada "cualitativa" se basa, en el análisis de documentos (fuentes basadas en texto) y que todo este proceso implica un costo, pero es necesario y garantiza una mejor calidad de la información. Por esta razón, sostenemos que la transparencia en la información que empleamos de las fuentes basadas en texto ayuda a otorgar mayor claridad en el proceso de investigación y permite adquirir nuevos conocimientos. 4. Recent discussions about transparency in political science have become fraught with concernsover replicability or even scholarly misconduct. The report of the QTD Working Group on Text-Based Sources emphasizesinstead that the ultimate goal of augmenting transparency is to increase our ability to evaluate evidentiary claims, build on prior research, and produce better knowledge. 
Consideramos que de suma importancia implementar la transparencia en la metodología de selección de las fuentes basadas en texto que utilizamos para compartir información pública. Para ello, es necesario hacer un proceso analítico de deliberación de las fuentes basadas en texto; esto consiste en evaluar si los datos que se compartirán abiertamente son "verídicos". #### URL 31. Jan 2019 32. Local file Local file 1. I mean they only had two when I joined Ether and Bitcoin, but they were pretty selective compared to a lot of exchanges and I heard some good things from other friends who had been using it. So I trusted that. 2. And I realize that late. Um, but I still did get out at a reasonably okay time because I really like all my friends.Derek:00:59:32 I really use what my friends are saying on crypto 3. ah, and I've heard from a lot of traders, like it definitely is an evolving process. 4. So I followed that. I followed this one trader. He has 100,000 followers on twitter. He's just scalper uh, margin trader on Big Phoenix. Amazing. Gives amazing videos. Incredible. Uh, I follow him a lot. Um, I guess my style would be most closely to his, I think then definitely Rsi. 5. Yeah, so I do not have a background in coding, uh, and on on trading view, they have like a social, I really like their community. It's definitely a big community of like higher tier 6. eetups. I'm trying to really go to meetups and meet other people and I feel like during the bear market, the quality of the meetups really increases because the people that are actually really interested not just for the price before everything else are showing up. 7. Twitter is my go to and people post a news articles from like ccn from what does it coined, ask a bunch of these crypto news things and they're great. Uh, you know, I take them with a grain of salt because whatever, there's a lot of like fake news. 8. Um, yeah, it was definitely a on twitter before I really understood what ta was. And I would see people post all these charts and I would always just be taking their word for it. And you know, people post different types of charts and different layouts. discovered coinagy through social media 9. Uh, I'm on definitely on twitter, all scrolling through and guys and people post interesting theories. 10. uh, Discord is the best and telegram. Those two. Sometimes people will do their own members area by using like click funnels or something like that. Discord is the easiest because you can separate channels. Um, and it's free. Uh telegram. I've like specific groups just because they've built in functionality that usually triggers by phone or at least more as from an notification standpoint. Where sometimes it gets lost in Discord 11. Um, three commas had been tested by another people that I guess I was kind of following the social proof justification that enough people were in it so that made me more confident in using it. 12. I don't personally like blindly entering trades that... 13. will basically want to follow along with why they're notating that as a potential trade board or following it just to see how it plays out, uh, to basically use it as a learning 14. Most of it is based around talking to people who are more competent or just more comfortable in a specific trading strategy. 15. or that we're looking out for. Would need us to watch after a trade or to be looking out for a trade. Some of them have signals, like targets for traditional markets. 
They might have just mentioned, hey, if someone's running the group, they might've mentioned, hey, this is a point where I'm looking to enter short. Uh, so tell where they're looking to essentially place to stop, um, crypto groups Okay. Okay. Yeah, yeah. Okay. Uh, either injured but stop here. And then yourself targets are one, two, three, but it's less structured traditionally. 16. Uh, yeah, I'm in a few groups. There's a couple of the crypto focused, uh, the also have been just, I wouldn't say [inaudible], but have put more emphasis on, you know, since we're technical traders, there's a reason not to take advantage of, uh, the market opportunities and traditional as they pop up. So we've been focused mainly on just very few inverse etfs to short the s&p to short some major Chinese stocks, um, doing some stuff with, uh, oil, gas. And then there's some groups that I'm in that are specifically focused on just traditional, uh, that are broken up or categorized by what they're trading. #### Annotators 33. wendynorris.com wendynorris.com 1. In summary, the ECGs study from DRC showed me that the use of disaster periods created analytical problems. The categories often over­lapped, different groups perceived and experienced the disaster phases differently, and individuals or groups defined differently the actual or potential event Mismatch between disaster phase classifications and temporal periods of those phases as experienced by individuals/groups. 2. emergent citizen groups (ECGs) in disasters (e.g., Neal I 984, Quarantelli 1985) How are emergent citizen groups defined? How is it similar/different than DHNs? Get these papers. #### URL 34. Sep 2018 35. glcateachlearn.org glcateachlearn.org 1. In other words, a student may have decided that they want to remain a peripheral member. Interesting shift in perspective here. From a non-choice, the student is given some space to make an active choice. (A choice which may disappoint and impoverish others who do choose to fully engage the community, but their choice to make nonetheless.) #### URL 36. cnx.org cnx.org 1. City officials can actually help if they go out into the streets and ask real people what actually is going on. Something on blogs and on polls arent true, they dont always speak the truth. If they were to go out to communities and build relationships with people, they would have a clearer understanding of what is going on. 2. I dont believe some of this, blacks never had a voice during . That time if they were to speak up during that time they would often get punished. Blacks had no say in there freedom, slavery wasn't abolished to help slaves, Abraham Lincoln didn't do it out of the kindness out of his heart. 37. hypothes.is hypothes.is 1. I believe people in sometimes feel they have no voice are say. There are pathways were people try there best to find change and still see no result i believe to have to change we have to write congress men and people in the government letters to how we may feel. We must be aware together but, its better sometimes to be the odd person out the bunch. It takes one person doing something different to see results. 38. Aug 2018 39. www.fao.org www.fao.org 1. El Mecanismo de Tecnologías Limpias y El Mercado de Bonos de Carbono se encuentran disponibles para el establecimiento de convenios multilaterales para aprovechar las oportunidades de captura de carbono por las plantaciones forestales del norte de México. Un plus en la investigación, que podría aplicarse en mi región en plantaciones forestales. 2. 
Los datos dasométricos de 25 cuadrantes de 20m x 30m fueron levantados en las plantaciones de los ejidos mencionados anteriormente. La edad de las plantaciones varió desde 6 hasta 20 años para tener una crono secuencia definida y poder modelar en tiempo el crecimiento en volumen, área basal y densidad a nivel del rodal. Podría considerar la edad de la plantación a estudiar y realizar dicha cronosecuencia. #### URL 40. Feb 2018 41. pmnerds.com pmnerds.com 1. The Bottom Line is that you will benefit from using the community group Unlike other approaches to learning new PM concepts that span many disciplines and competencies, we help you focus on your strengths and concerns within groups, while developing a holistic solution, that optimally increases your competitive advantage. Steps to Creating a Group: • Join the Community • Invite Others to Join #### URL 42. pmnerds.com pmnerds.com • To create a Group, select "New Group" • Select a Category that best fits the purpose of your Group. • Give your Group a title, a description • Make the Group Private or Public. • You can also select " Invite Only Group", so only people that are invited can join and see the group. 2. Groups Unlike other approaches to learning new concepts that leave you to your own resources, Groups provide a safe environment for meaningful open discussions, shared experiences and assets, to help you overcome change adoption hurdles. To participate in Groups: • Join the Community • Join groups that interest you • And participate in the Group discussions #### URL 43. Dec 2017 44. mattheneus-healthcare.com mattheneus-healthcare.com 1. G-DRG ( German Diagnosis Related Groups ) "Since patients within each category are clinically similar and are expected to use the same level of hospital resources" #### URL 45. Oct 2017 46. Local file Local file 1. Kamler, Barbara. 2008. “Rethinking Doctoral Publication Practices: Writing from and beyond the Thesis.” Studies in Higher Education 33 (3): 283–94. doi:10.1080/03075070802049236. #### Annotators 47. Sep 2017 48. www.mnemotext.com www.mnemotext.com 1. Pill is now, and much like “mere” tools such as cellphones or computers. This part of the text is a good example of how technology has become transparent because cellphones and other computers are used so regularly that the knowledge of how to use them, are second nature; however, social groups that are excluded from this idea are the lower class whom cannot afford such luxuries. Most of these examples seem to be geared towards the upper middle class. #### URL 49. Feb 2016 50. scripting.com scripting.com 1. The feed is how stuff enters their content system. But the feed itself is outside, leaving it available for other services to use. It's great when this happens, rather than doing it via a WG that tend to go on for years, and create stuff that's super-complicated, why not design something that works for you, put it out there with no restrictions and let whatever's going to happen happen. Interesting approach for hypothes.is to consider? #### URL 51. Dec 2015 52. groups.diigo.com groups.diigo.com 1. Page level notes: • General description of group, including an icon. • Easy to get the content via RSS. • Easily sortable stream: recent, popular, filter... • tag "cloud"--tags link to text with tags • list of members (with avatars) #### URL 53. Sep 2015 54. jonudell.net jonudell.net 1. I'm invited to toggle the dropdown (which implies filtering) to Public. Do so. What does that mean in this context? Nothing. 
I expected the dropdown to filter the annotation list as well. The fact that it doesn't was surprising. #### URL 55. schepers.cc schepers.cc 1. historical political boundaries of the native Americans We view the world in these simplified 2D representations of clearcut political entities. Fredrik Barth and Benedict Anderson have said quite a few important things about these issues of maps and boundaries. #### URL 56. Feb 2015 57. jonudell.net jonudell.net 1. Questions Sorry cannot read the questions in full as I cannot get my sidebar to collapse. I cannot speak for developers but from my perspective the page-based group seems a good starter because it seems more straight forward and is less likely to change in the future but still shows off a hint of the full capabilities of hypothesis. 2. Draft UX The mock ups are great. The copyright notice on the bottom of the annotations that the "Annotations can be used freely by anyone for any purpose" sort of defeats the idea of private readership of groups. 3. If you have the link you can participa Still not convinced that sharing a link will be secure enough for the initial audience of lawyers, educators, researchers that you had in your user stories. Would having the link illicitly allow you to view the annotations without being detected. 4. Both are private to participants This seems a sensible first step as a sort of soft launch but the capability of future progression to groups with limitations on annotators yet public readership will need to be considered in development. Will be an important part of groups if groups are to to become important in establishing credibility or reputation in publicly visible groups with a limitations on annotators. 6. Annotations show in stream Is this stream the annotations in the sidebar or a stream that is independent of the sidebar. This independent stream will be important for inter-page groups if used by say research or educational groups.. 7. Group name shows on cards Perhaps "{user} for {group} on {doc_title}" and "{user} for {group}" in the sidebar. 8. leave a group Does "leaving" amount to "unsubscribing"? Such that, leaving a group simply means it won't: 1. show up in your sidebar/stream content 2. won't send you email for additions in that group 3. won't be in your list of groups to publish into I.e. if I wanted back in, I could find that email with the invite link and re-join (or perhaps there's UI that let's me re-join past groups). 9. email pops up a new email with the subject set Pretty simple with mailto:{email}?subject="Annotate this"&body="http://..." Not all mail clients support body (iirc), but most/all support subject`. 10. Annotations show in stream Which stream? The public one? or a custom one? #### URL 58. www.pdf995.com www.pdf995.com 1. I have not explained this part well. It is important so I will try again. In my opinion the way groups are set up is crucial to the development of reputation for annotators on Hypothesis. Annotators’ reputations will be strongly related to the Groups they belong. Trusted groups will need to have private annotator membership that is extended by invitation only but these groups need to be able to choose between public or private readership. Other groups will have different requirements for annotators and readers in terms of the public, private, link mix. Bridge #### URL 59. Dec 2013 60. docs.webplatform.org docs.webplatform.org 1. 
annotation modes This might be enabled by the planned "groups" feature, along with common hashtags for that group added through the group admin interface.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20314204692840576, "perplexity": 10663.628388837991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192778.51/warc/CC-MAIN-20200919142021-20200919172021-00023.warc.gz"}
https://blogs.ams.org/beyondreviews/category/mathematics-in-the-news/
# Category Archives: Mathematics in the news

## Mathematics and epidemiology

Mathematics is a useful tool in studying the growth of infections in a population, such as what occurs in epidemics. A simple model is given by a first-order differential equation, the logistic equation, $\frac{dx}{dt}=\beta x(1-x)$, which is discussed in almost any … Posted in Mathematics in the news

## 3

After posting about Booker and Sutherland's cool expression of 42 as a sum of three cubes, Drew Sutherland wrote to say that they found a new way to write 3 as a sum of three cubes: $569936821221962380720^3 + (-569936821113563493509)^3 + \dots$ … Posted in Announcements, Mathematics in the news

## 42

The number 42 is famous for its occurrence in The Hitchhiker's Guide to the Galaxy. In 2032, Adele might come out with a new album with 42 as its title. But today, the fame of the number 42 has to … Posted in Announcements, Mathematics in the news

## Karen Uhlenbeck wins the 2019 Abel Prize

Karen Uhlenbeck is being awarded the 2019 Abel Prize. It is a remarkable award for a remarkable mathematician. Uhlenbeck did fundamental work in a quickly developing area of mathematics at an early stage of its development. I was a graduate … Posted in Mathematics in the news

## Quanta Magazine

Quanta Magazine, from the Simons Foundation, has been publishing some excellent articles about mathematics. It is not a research journal, so Mathematical Reviews doesn't cover it. Nevertheless, if you want to dig deeper into some of the mathematical issues discussed … Posted in Mathematicians, Mathematics in the news

## AMS Prizes and Awards – 2017

The AMS is announcing the winners of some of the major prizes that they will award at the upcoming Joint Mathematical Meetings in Atlanta (January 4-7, 2017). The Joint Prize Session, where prizes from the various participating societies will be presented, … Posted in Mathematicians, Mathematics in the news

## Mathematics for Democracy

There is mathematics in the New York Times today (December 6, 2015). Not research-level mathematics, but math nonetheless. Specifically, there is an article about using two simple statistical tests as indicators of gerrymandered voting districts. By themselves, the tests don't … Posted in Mathematics in the news
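As a small aside on the logistic equation mentioned in the first post (this sketch is mine, not the blog's), a few lines of Python are enough to integrate $\frac{dx}{dt}=\beta x(1-x)$ with a forward-Euler step and see the S-shaped growth curve that simple epidemic models produce. The growth rate, initial value, and step size are arbitrary choices.

```python
beta = 0.5    # growth rate (arbitrary)
x = 0.01      # initial infected fraction (arbitrary)
dt = 0.1      # time step
steps = 200

trajectory = [x]
for _ in range(steps):
    x += dt * beta * x * (1 - x)   # forward-Euler step of dx/dt = beta*x*(1-x)
    trajectory.append(x)

# Early near-exponential growth, then saturation toward x = 1.
print(trajectory[0], trajectory[50], trajectory[100], trajectory[-1])
```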
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3862290382385254, "perplexity": 2703.5084999665596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00638.warc.gz"}
https://tex.stackexchange.com/questions/428259/tikz-use-calc-library-in-3d-space
# TikZ, use calc library in 3D space The TikZ library calc can handle angle values: \documentclass[border=2mm,tikz]{standalone} \usepackage{tikz} \usetikzlibrary{arrows.meta,calc,intersections} \begin{document} \begin{tikzpicture} \draw[->] (-3,0,0) -- (3,0,0) node[below right] {$x$}; \draw[->] (0,-3,0) -- (0,3,0) node[above right] {$y$}; \draw[->] (0,0,-3) -- (0,0,3) node[below right] {$z$}; \coordinate (o) at (0,0,0); \coordinate (a) at (3.1,0,1.2); \draw[dashed] (a) -- (o); \path (a) -- coordinate[pos=0.32] (b) (o); \draw [thick,-{Straight Barb},orange] (a) -- ($(a)!1.2cm!90:(o)$) coordinate[label={[black]above left:c}] (c); \draw[thick,-{Straight Barb},gray] (a) -- node[pos=0.7, below=0.35em] {b} (b); \draw [thick,-{Straight Barb},red] (a) -- ([shift={(0,1.5,0)}]a) coordinate[label={[black]above right:d}] (d); \draw (d) -- ($(c)!1.4cm!90:(a)$) coordinate (h); \end{tikzpicture} \end{document} The output is: Giving the angle in ($(c)!1.4cm!90:(a)$) any positive or negative value, h will always be in the (x,z) plane. In this case, the segment c--h should instead be vertical (parallel to y) with respect to the (x,z) plane. So, how can the expression ($(c)!1.4cm!90:(a)$) be modified in order to accomplish this? As noticed in the comments, the above code does not use tikz-3dplot. If, however, this package can provide a solution besides the traditional TikZ-only approach, it is ok and it can be used here. ## The requested elucidation This question is part of a series (1st question, 2nd question, 3rd question, 4th question, 5th question). It is asked to depict a triad of mutually orthogonal vectors. One of them (b, in the image above) is in the (x,z) plane in a general position and points to the origin; the plane generated by the vectors c and d should be highlighted. This triad should be depicted in a 3D space, and I should be able to arbitrarily rotate the space orientation. The end configuration of the axes should be the one shown in this answer, with the z axis pointing right. The tool through which this can be obtained is not important: it can be TikZ, as well as tikz-3dplot, as well as a combination of them. Whether tikz-3dplot must be used or not is part of the question: it can sometimes be the only tool, it can sometimes be just an alternative. So far, I don't know tikz-3dplot enough. All this summary wasn't written at the beginning because it was difficult, if not impossible, for me to work and make all the attempts on this picture directly. I was not able to provide any example or failed attempts regarding the general picture. The question would have certainly disapproved and criticized (as it was the 1st question). The 2nd question was actually ambiguous, because I didn't notice the 45° line. Given that, I always tried to provide quick and precise questions, after making some attempts. Thanks to all those who try to work on these images. I hope that this meets as much as possible the clarification that has been asked in the comments. • If (c) and (a) were points in a 3d space there would not be such point as ($(c)!1.4cm!90:(a)$) because in 3d there is no sens to say "rotate around (c) at 90° in the positive direction". If you want a point to be 1.4cm above (c) you can use ([yshift=1.4cm]c). A point with "coordinate" (x,y,z) is a 2d point that is a (non orthogonal) projection of this 3d point. – Kpym Apr 24 '18 at 12:33 • You are tagging this question tikz-3dplot, but not using it. 
With this package, you can work in any plane, and then statement "rotate around (c) by 90° in the positive direction" does make sense. (Note that I never used this calc syntax in tikz-3dplot my self, so I am not 100% sure that this is a good advice. The command \draw (a) rectangle (b); does not yield a rotated rectangle.) – user121799 Apr 24 '18 at 13:40 • @marmot No, whatever 3d package you use "rotate around (c) by 90° in the positive direction" make no sens. You can rotate around an oriented axis, but not around a point in 3d. – Kpym Apr 24 '18 at 14:25 • @Kpym Hmmh, I guess that is debatable, depending on whether you interpret (c) as vector/axis or point. In the latter case, you are right, but not on the former. – user121799 Apr 24 '18 at 14:37 • @BowPark if you stock the coordinates of (a) in two macros \x and \z then the coordinates of (c) would be (a multiple of) (-\z‚0‚\x). – Kpym Apr 26 '18 at 10:57 Not a real answer, but perhaps the first step. If you load the library 3d, you can specify the plane in which you want to work. Here is an example. \documentclass[border=2mm,tikz]{standalone} \usepackage{tikz} \usetikzlibrary{arrows.meta,calc,intersections,3d} \begin{document} \begin{tikzpicture} \draw[->] (-3,0,0) -- (3,0,0) node[below right] {$x$}; \draw[->] (0,-3,0) -- (0,3,0) node[above right] {$y$}; \draw[->] (0,0,-3) -- (0,0,3) node[below right] {$z$}; \coordinate (o) at (0,0,0); \coordinate (a) at (3.1,0,1.2); \draw[dashed] (a) -- (o); \path (a) -- coordinate[pos=0.32] (b) (o); \draw [thick,-{Straight Barb},orange] (a) -- ($(a)!1.2cm!90:(o)$) coordinate[label={[black]above left:c}] (c); \draw[thick,-{Straight Barb},gray] (a) -- node[pos=0.7, below=0.35em] {b} (b); \draw [thick,-{Straight Barb},red] (a) -- ([shift={(0,1.5,0)}]a) coordinate[label={[black]above right:d}] (d); \begin{scope}[canvas is yz plane at x=1] \draw (d) -- ($(c)!1.4cm!90:(a)$) coordinate (h); \end{scope} \end{tikzpicture} \end{document} As you see, the syntax is rather self-explanatory. I am however struggling to understand what you precisely want since, as pointed out by @Kpym, your instructions are somewhat ambiguous. Yet I do hope that this example will help you achieve what you want. Notice also that I spent some time translating your code to tikz-3dplot, but did not find an elegant way to do that. Obstacles include the fact that tikz-3dplot doesn't make it too straightforward to make the y-axis point up, it is using Euler angles, which makes it hard to do a rotation about the x-axis, and that there is a reflection required to make the axes match. These are not conceptual problems, but I was not able to produce something elegant either. • Thank you! There's one thing I don't understand: is \draw (d) -- ($(c)!1.4cm!90:(a)$) coordinate (h); drawing in the (y,z) plane? If yes, this is in general not the plane of the vectors c and d. Thank you also for having tried with tikz-3dplot. I edited the question to add the requested details. In the final result, the z axis should point right and the x axis up, so maybe even this orientation is troublesome. If you think the code is too intricated, tikz-3dplot is not indispensable. – BowPark Apr 24 '18 at 18:09 • You are right, and that's why I said that this is not a full answer. One would have to carefully adjust the plane in such a way that the respective points are in. I think one may use these gorgeous macros to achieve this, or, if that's not the case, write something that does it. And I have mixed feelings about tikz-3dplot. 
On the one hand, I really love this package and use it a lot, but it has also some limitations starting with the fact that in the main coordinate system the z-axis always points up. – user121799 Apr 24 '18 at 18:15
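Setting the TikZ syntax aside, the geometric core of the question (find a point h at a given distance from c such that the segment c--h is perpendicular to c--a and lies in the plane through a, c and d) is easy to state with cross products, which is the 3D replacement for the 2D "rotate by 90°" calc syntax. The sketch below is my own illustration with made-up coordinates, not the ones in the picture; the resulting numbers could then be transferred back into \coordinate definitions by hand or via a script.

```python
import numpy as np

# Made-up stand-ins for the points of the figure (assumptions, not the actual coordinates).
a = np.array([3.1, 0.0, 1.2])
c = np.array([2.6, 1.1, 1.0])
d = np.array([3.1, 1.5, 1.2])
L = 1.4   # desired length of the segment c--h

u = a - c                        # direction the new segment must be perpendicular to
n = np.cross(u, d - c)           # normal of the plane through a, c, d
direction = np.cross(n, u)       # perpendicular to u and still inside that plane
h = c + L * direction / np.linalg.norm(direction)

print(h)
print(np.dot(h - c, u))          # ≈ 0: c--h really is perpendicular to c--a
```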
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7454368472099304, "perplexity": 800.6420754244197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00238.warc.gz"}
https://quant.stackexchange.com/questions/42930/whats-the-logic-behind-3-10-ust-yield-inversion-predicting-recession/42931
# What's the logic behind 3-10 UST yield inversion predicting recession? Is there a causal, behavioral, or logical explanation behind this indicator, or is it purely an observation based on correlation? My guess is that there are existing derivatives with clauses that force them to take action that results in inverting the curve, because it makes no sense to receive less for 10s than 3s. • The real interest rate measures the rate at which consumption is expected to grow over a given horizon. A high 1-year yield signals that growth is expected to be high over a one-year horizon. A high 10-year yield signals that annual growth is expected, on average, to be high over a ten-year horizon. If the difference in the 10-year and 1-year yield is positive, then growth is expected to accelerate. If the difference is negative--i.e., if the real yield curve inverts--then growth is expected to decelerate. andolfatto.blogspot.com/2018/09/… Dec 6, 2018 at 1:39
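For completeness, the indicator itself is mechanically trivial once the two constant-maturity yield series are in hand; the sketch below only shows the arithmetic, using invented numbers rather than actual Treasury yields.

```python
# Hypothetical constant-maturity yields in percent (invented numbers, not real data).
three_month = [2.10, 2.15, 2.20, 2.30, 2.40]
ten_year    = [2.90, 2.70, 2.45, 2.28, 2.20]

for m3, y10 in zip(three_month, ten_year):
    spread = y10 - m3                     # the 3m/10y term spread
    status = "INVERTED" if spread < 0 else "normal"
    print(f"10y {y10:.2f}%  3m {m3:.2f}%  spread {spread:+.2f} pp  {status}")
```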
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9128680229187012, "perplexity": 1444.9086245425553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00089.warc.gz"}
http://math.stackexchange.com/questions/52266/the-leap-to-infinite-dimensions/52269
# The leap to infinite dimensions Extending this question, page 447 of Gilbert Strang's Algebra book says What does it mean for a vector to have infinitely many components? There are two different answers, both good: 1) The vector becomes $v = (v_1, v_2, v_3 ... )$ 2) The vector becomes a function $f(x)$. It could be $\sin(x)$. I don't quite see in what sense the function is "infinite dimensional". Is it because a function is continuous, and so represents infinitely many points? The best way I can explain it is: • 1D space has 1 DOF, so each "vector" takes you on "one trip" • 2D space has 2 DOF, so by following each component in a 2D (x,y) vector you end up going on "two trips" • ... • $\infty$D space has $\infty$ DOF, so each component in an $\infty$D vector takes you on "$\infty$ trips" How does it ever end then? 3d space has 3 components to travel (x,y,z) to reach a destination point. If we have infinite components to travel on, how do we ever reach a destination point? We should be resolving components against infinite axes and so never reach a final destination point. - Do you know anything about Fourier series? –  Matt Calhoun Jul 19 '11 at 0:20 there are different notions of "basis". the algebraic one (sometimes called a hamel basis) is a collection of independent vectors st every vector can be written as a finite linear combination of basis elements. in something like $L^2(S^1)$ you might consider the orthonormal basis $\{\cos(nx), \sin(nx) : n=0,1,2,3,...\}$ where $L^2$ functions can be written as infinite linear combinations (fourier series) of the basis functions. –  yoyo Jul 19 '11 at 0:45 @Theo Buehler: Hm? –  Christian Blatter Jul 19 '11 at 12:08 @Theo But in $f(x)=\sin(x)$, say $x=1$ (basis=1), then $f(x)=\sin(1)$ which is a value, not a function –  bobobobo Jul 19 '11 at 12:59 @Christian Blatter: Oh... that was a major lapse. (How could 4 people agree?) `@bobobobo: Sorry about that. GleasSpty and Agustí expand on what I was trying to say, but correctly. –  t.b. Jul 19 '11 at 14:20 One thing that might help is thinking about the vector spaces you already know as function spaces instead. Consider $\mathbb{R}^n$. Let $T_{n}=\{1,2,\cdots,n\}$ be a set of size $n$. Then $$\mathbb{R}^{n}\cong\left\{ f:T_{n}\rightarrow\mathbb{R}\right\}$$ where the set on the right hand side is the space of all real valued functions on $T_n$. It has a vector space structure since we can multiply by scalars and add functions. The functions $f_i$ which satisfy $f_i(j)=\delta_{ij}$ will form a basis. So a finite dimensional vector space is just the space of all functions on a finite set. When we look at the space of functions on an infinite set, we get an infinite dimensional vector space. - Do you mean $T_n$ is an n-tuple? (So ${ a_1, a_2 ... a_n }, a_n \epsilon \mathbb{R}$ )? –  bobobobo Jul 19 '11 at 15:39 @bobobobo: No. I mean $T_n$ is a set of size $n$. Any set of size $n$. It could represent the vertices of a graph, in which case we are talking about the vector space of functions on a graph. Or it could be the elements of a group. Above I used the numbers $1$ to $n$ for simplicity. We could have $T=\{\text{cat}, \text{ dog}, \text{ rat} \}$. In this case, the space of all functions from $T$ to $\mathbb{R}$ is a three dimensional vector spaces over $\mathbb{R}$. A basis would be the three delta functions. 
–  Eric Naslund Jul 19 '11 at 15:51 This is a nice answer, but I still haven't found what I'm looking for –  bobobobo Jul 19 '11 at 23:16 I would also like to add to Eric's answer (it turned out that this was too long to be just a comment) that in general it's probably not a good idea to think of a vector as defined in terms of its components. Rather, one should probably think of a vector as an element of an abstract vector space, and then, once a basis is chosen, you can represent the vector in that basis by its components with respect to that basis. If the (algebraic) basis is finite, then you can write the coordinates as usual as $(v_1,\ldots ,v_n)$. Similarly, if the (algebraic) basis is countably infinite, the vector can be represented by its components as $(v_1,\ldots ,v_n,\ldots )$. In general, if the (algebraic) basis is indexed by an index set $I$, the components of a vector will be a function $f_v:I\rightarrow F$, where $F$ is the field you're working over. In the second example you posted above, you can take $V$ to be the set of all bounded functions on $\mathbb{R}$ and you can take $F=\mathbb{R}$. Then, for each $x_0\in \mathbb{R}$, you may define the function $$\delta _{x_0}(x)=\begin{cases}1 & \text{if }x=x_0 \\ 0 &\text{otherwise}\end{cases}$$ It turns out that the collection $\left\{ \delta _{x_0}|\, x_0\in \mathbb{R}\right\}$ forms an algebraic basis for $V$. This collection is naturally indexed by $\mathbb{R}$, and so by choosing this basis you can think of a function in $V$ as represented by a function from $\mathbb{R}$ (the indexing set) to $\mathbb{R}$ (the field). In this case, that function was $\sin (x)$, which, because of how we chose our basis, agrees with the element of $V$ it is trying to represent, namely the original function $\sin$. Hope that helps! P.S.: I use the term algebraic basis to distinguish it from a topological basis, which is often more useful in infinite-dimensional settings. - I won't say anything more than Theo and Eric have already said, but... As Eric says, every $\mathbb{R}^n$ can be seen as a space of functions $f: T_n \longrightarrow \mathbb{R}$. That is, the vector $v = (8.2 , \pi , 13) \in \mathbb{R}^3$ is the same as the function $v: \left\{ 1,2,3\right\} \longrightarrow \mathbb{R}$ such that $v(1) = 8.2, v(2) = \pi$ and $v(3) = 13$. So, the coordinates of $v$ are the same as its values on the set $\left\{ 1,2,3\right\}$, aren't they? Indeed, the coordinates of $v$ are the coefficients that appear in the right-hand side of this equality: $$(8.2, \pi , 13) = v(1) (1,0,0) + v(2) (0,1,0) + v(3) (0,0,1) \ .$$ On the other hand, the coordinates of $v$ are its coordinates in the standard basis of $\mathbb{R}^3$: $e_1 = (1,0,0), e_2 = (0,1,0)$ and $e_3 = (0,0,1)$ and we can look at these vectors of the standard basis as functions too -like all vectors in $\mathbb{R}^3$. They are the following "functions": $$e_i (j) = \begin{cases} 1 & \text{if}\quad i=j \\ 0 & \text{if}\quad i \neq j \end{cases}$$ This is an odd way to look at old, reliable $\mathbb{R}^3$ and its standard basis, isn't it? Well, the point in doing so is to get a handle on the following construction: let $X$ be any set (finite or infinite, countable or uncountable) and let's consider the set of all functions $f: X \longrightarrow \mathbb{R}$ (not necessarily continuous: besides, since we didn't ask $X$ to be a topological space, it doesn't make sense to talk about continuity).
Call this set $$\mathbb{R}^X \ .$$ Now, you can make $\mathbb{R}^X$ into a real vector space by defining $$(f + g)(x) = f(x) + g(x) \qquad \text{and} \qquad (\lambda f)(x) = \lambda f(x)$$ for every $x \in X$, $f, g \in\mathbb{R}^X$ and $\lambda \in \mathbb{R}$. And you would have a "standard basis" too in $\mathbb{R}^X$ which would be the set of functions $e_x : X \longrightarrow \mathbb{R}$, one for each point $x \in X$: $$e_x (y) = \begin{cases} 1 & \text{if}\quad x=y \\ 0 & \text{if}\quad x \neq y \end{cases} \ .$$ So, you see, $\mathbb{R}^3$ can be seen as a particular example of a space of functions $\mathbb{R}^X$ if you see the number $3$ as the set $\left\{ 1,2,3\right\}$: $\mathbb{R}^3 = \mathbb{R}^{\left\{ 1,2,3\right\}} = \mathbb{R}^{T_3}$ and the "coordinates" of a function $f\in \mathbb{R}^X$ are the same as its values $\left\{ f(x)\right\}_{x \in X}$. (In fact, a function $f$ is the same as its set of values over all points of $X$, isn't it? -Just in the same way as you identify every vector with its coordinates in a given basis.) Warning. I've been cheating a little bit here, because, in general, the set $\left\{ e_x\right\}_{x\in X}$ is not a basis for the vector space $\mathbb{R}^X$. If it was, every function $f\in \mathbb{R}^X$ could be written as a finite linear combination of those $e_x$. Indeed you have $$f = \sum_{x\in X} f(x) e_x \ ,$$ but the sum on the right need not be finite -if $X$ is infinite, for instance. One way to fix this: instead of $\mathbb{R}^X$, consider the subset $S \subset \mathbb{R}^X$ of functions $f: X \longrightarrow \mathbb{R}$ such that $f(x) \neq 0$ just for a finite number of points $x\in X$. Then it is true that $\left\{ e_x\right\}_{x\in X}$ is a basis for $S$. (Otherwise said, $\mathbb{R}^X = \prod_{x\in X} \mathbb{R}_x$ and $S = \bigoplus_{x\in X} \mathbb{R}_x$, where $\mathbb{R}_x = \mathbb{R}$ for all $x\in X$.) - I will try to answer your question about in which sense a space is called infinite dimensional, and how you can, despite this, reach any destination point. It is a theorem that every vector space $V$ has some basis $B\subseteq V$. This means that every vector $v\in V$ can be written as $v=c_1b_1+\cdots+c_nb_n$ for some scalars $c_1,\ldots,c_n\in\mathbb R$ and some basis vectors $b_1,\ldots,b_n\in B$ and some integer $n$. It is very important to note here that only a finite number of basis vectors were used. So even if $V$ is infinite-dimensional, which means that $B$ contains infinitely many basis vectors $b$, we only make a finite number of "trips" from the origin along some basis vectors. We have infinitely many basis vectors to choose from (these are our "degrees of freedom"), but we choose only a finite number $n$ of them, say a hundred, and travel a scalar multiple of $c_i$ along each (where $i=1,\ldots,n$), in order to reach a vector $v$ in our vector space. Now, it's understandable to be confused about this, because it's difficult to give concrete examples for general infinite dimensional spaces. If we take $V$ as the space of functions $f:\mathbb R\to\mathbb R$, it is tempting to think of $f(x)$ as the coordinate of the vector $f$ at the position $x$, in the same way we think of $v(2)=15$ as the coordinate of the vector $v=(7,15,11)\in\mathbb R^3$ at the position $2$. But this doesn't work: if the values $f(x)$ are our only coordinates, how could we "reach" $f=\sin$?
We would need to make a "trip" from $0$ to $\sin(x)$ at every $x$, and this involves infinitely many trips, which we're not allowed to do by the definition of a basis. The problem is that even more coordinates than just the $f(x)$ are needed in order to specify a function $f$, or as Agustí Roig put it: the functions $e_x$ (in the notation from his post) are not a basis! It's difficult to visualize any basis for the vector space of all functions $\mathbb R\to\mathbb R$: in fact, one needs the axiom of choice to prove that there exists a basis, and no concrete example can be given. You will have to look at another space if you want to be able to better visualize the coefficients of the vectors. One example is the space $V_0$ of all functions $f:\mathbb R\to\mathbb R$ such that $f(x)=0$ for all but finitely many $x$. Then, in fact, you can view $f(x)$ as the coordinate of $f$ at the position $x$. To reach any function $f\in V_0$, you need to make only a finite number of "trips". -
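To tie the answers above together with something executable: the recurring point is that, with respect to an algebraic (Hamel) basis, every vector is a finite linear combination of basis elements, so its coordinate function has finite support. The small Python sketch below is an editorial illustration, not part of the original thread; it stores such coordinate functions sparsely as dictionaries, covering both Eric's $\mathbb{R}^{T_3}$ and the space $V_0$ of finitely supported functions from the last answer.

```python
import math

# A vector as a finitely supported function index -> coefficient.
# For R^3 the index set is {1, 2, 3}; for V_0 it is all of R, but any
# particular vector only "uses" finitely many indices (finitely many trips).

def delta(i):
    """The basis function e_i, stored sparsely: {i: 1.0}."""
    return {i: 1.0}

def combine(coeffs):
    """Finite linear combination sum_i c_i * e_i of delta functions."""
    out = {}
    for i, c in coeffs.items():
        for j, value in delta(i).items():
            out[j] = out.get(j, 0.0) + c * value
    return out

# R^3 viewed as functions on the index set {1, 2, 3}:
v = {1: 8.2, 2: math.pi, 3: 13.0}
assert combine(v) == v

# An element of V_0: zero except at finitely many real numbers.
f = {0.5: 2.0, math.sqrt(2): -1.0}   # two "trips", even though the index set is R
assert combine(f) == f
# sin, by contrast, is nonzero at infinitely many points, so it is *not*
# a finite combination of deltas -- exactly the warning made in the answers above.
```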
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926163375377655, "perplexity": 137.1040259932391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444774.49/warc/CC-MAIN-20141017005724-00354-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/two-sliders-work-and-energy.308940/
# Homework Help: Two Sliders, work and energy 1. Apr 21, 2009 ### dietwater 1. The problem statement, all variables and given/known data Each of the sliders A and B has a mass of 2 kg and moves with negligible friction in its respective guide, with y being in the vertical direction (see Figure 3). A 20 N horizontal force is applied to the midpoint of the connecting link of negligible mass, and the assembly is released from rest with θ = 0°. Determine the velocity vA with which slider A strikes the horizontal guide when θ = 90°. [vA = 3.44 m/s] 2. Relevant equations 1/2 mv^2 F = ma Wp = mgh SUVAT 3. The attempt at a solution When at 0 degrees, W = 0 J. At 90°, F = 20 N, so W = 20 × d = 8 J. Work from cart A = 0.5mv^2, therefore 16 = mv^2 and v = 2√2. Or... do I need to add the energy from the 20 N force and from cart B? 0.5mv^2 (B) + 8 J = 0.5mv^2 (A) With F = ma, 20/10 a = 10, therefore v(B) = 2√2. Sub this into the above equation: 8 + 8 = 0.5mv^2, so v = 4. Help! I've been going round in circles; clearly I'm wrong, lol. Can someone explain how I could work this out, please?
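For the record, here is one consistent way to set up the energy balance (an editorial sketch, not part of the original thread, under the usual reading of the figure: A slides in the vertical guide, B in the horizontal guide, joined by a rigid link of length b). The kinematics give x_B = b·sinθ and y_A = b·cosθ, so at θ = 90° slider B is momentarily at rest and all of the kinetic energy is in A; the 20 N force acts through the midpoint's horizontal travel b/2, and gravity acts through A's drop b. The link length is not quoted in the post (it is presumably given in Figure 3), so the value below is an assumption chosen to be consistent with the stated answer.

```python
from math import sqrt

# Energy balance for the two-slider problem (sketch).
m = 2.0      # mass of each slider, kg
P = 20.0     # horizontal force on the link midpoint, N
g = 9.81     # m/s^2
b = 0.4      # link length, m (ASSUMED -- not stated in the quoted post)

# From theta = 0 to 90 deg:
W_force   = P * (b / 2.0)   # midpoint moves horizontally by b/2
W_gravity = m * g * b       # slider A drops a height b; B stays level

# At theta = 90 deg, v_B = 0, so the total work goes into A's kinetic energy.
v_A = sqrt(2.0 * (W_force + W_gravity) / m)
print(round(v_A, 2))        # ~3.44 m/s, matching the bracketed answer
```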
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.839150071144104, "perplexity": 3588.889085305027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864364.38/warc/CC-MAIN-20180622065204-20180622085204-00373.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-th/0508134/
# To the Fifth Dimension and Back Raman Sundrum Department of Physics and Astronomy The Johns Hopkins University 3400 North Charles Street Baltimore, MD 21218, USA ###### Abstract Introductory lectures on Extra Dimensions delivered at TASI 2004. ## 1 Introduction There are several significant motivations for studying field theory in higher dimensions: (i) We are explorers of spacetime structure, and extra spatial dimensions give rise to one of the few possible extensions of relativistic spacetime symmetry. (ii) Extra dimensions are required in string theory as the price for taming the bad high energy behavior of quantum gravity within a weakly coupled framework. (iii) Extra dimensions give rise to qualitatively interesting mechanisms within effective field theory that may play key roles in our understanding of Nature. (iv) Extra dimensions can be a type of “emergent” phenomenon, as best illustrated by the famous AdS/CFT correspondence. These lectures are intended to provide an introduction, not to the many attempts at realistic extra-dimensional model-building, but rather to central qualitative extra-dimensional mechanisms. It is of course hoped that by learning these mechanisms in their simplest and most isolated forms, the reader is well-equipped to work through more realistic incarnations and combinations, in the literature, or better yet, as products of their own invention. (Indeed, to really digest these lectures, the reader must use them to understand some particle physics models and issues. The other TASI lectures are a good place to start.) When any of the examples in the lectures yields a cartoon of the real world, or a cartoon solution to real world problems, I point this out. The lectures are organized as follows. Section 2 gives the basic language for dealing with field theory in the presence of extra dimensions, “compactified” in order to hide them at low energies. It is also shown how particles of different spins in four dimensions can be unified within a single higher-dimensional field. Section 3 illustrates the “chirality problem” associated with fermions in higher dimensions. Section 4 illustrates the emergence of light scalars from higher dimensional theories without fundamental scalars, computes quantum corrections to the scalar mass (potential), and assesses how natural these light scalars are. Section 5 describes how extra dimensional boundaries (and boundary conditions) can be derived from extra dimensional spaces without boundary, by the procedure of “orbifolding”. It is shown how the chirality problem can thereby be solved. The localization of some fields to the boundaries is illustrated. Section 6 describes the matching of the higher dimensional couplings to the effective four-dimensional long-distance couplings. In Section 7, the issue of non-renormalizability of higher-dimensional field theory is discussed and the scale at which a UV completion is required is identified. Higher-dimensional General Relativity is discussed in Section 8, in partiucular the emergence of extra gauge fields at low energies as well as scalar “radion” fields (or “moduli”) describing massless fluctuations in the extra-dimensional geometry. Section 9 illustrates how moduli may be stabilized to fix the extra-dimensional geometry at low energies. Section 10 describes the unsolved Cosmological Constant Problem as well as the less problematic issue of having a higher-dimensional cosmological constant. 
Section 10 shows that a higher dimensional cosmological constant leads to “warped” compactifications, as well as the phenomenon of “gravity localization”. Section 11 shows that strongly warped compactifications naturally lead to hierarchies in the mass scales appearing in the low energy effective four-dimensional description. Section 12 shows that when warped hierarchies are used to generate the Planck/weak-scale hierarchy, the extra-dimensional graviton excitations are much more strongly coupled to matter than the massless graviton of nature, making them observable at colliders. Section 13 shows how flavor hierarchies and flavor protection can arise naturally in warped compactification, following from a study of higher-dimensional fermions. Section 14 studies features of gauge theory, including the emergence of light scalars, in warped compactifications. The TASI lectures of Ref. [1] and Ref. [2], and the Cargese lectures of Ref. [3], while overlapping with the present lectures, also contain complementary topics and discussion. The central qualitative omissions in the present lectures are supersymmetry, which can combine with extra dimensions in interesting ways (see the TASI lectures of Refs. [1] and [4]), a longer discussion of the connection of extra dimensions to string theory [5] [6], a discussion of fluctuating “branes” (see Refs. [1] and [3]), and the (very illuminating) AdS/CFT correspondence between some warped extra-dimensional theories and some purely four-dimensional theories with strong dynamics [7] [8] [9]. Phenomenologically, there is no discussion of the “Large Extra Dimensions” scenario [10], although these lectures will equip the reader to easily understand it. The references included are meant to be useful and to act as gateways to the broader literature. They are not intended to be a complete set. I have taken moderate pains to get incidental numbers right in the notes, but I am fallible. I have taken greater pains to ensure that important numbers, such as exponents, are correct. ## 2 Compactification and Spin Unification Let us start by considering Yang-Mills (YM) theory in five-dimensional (5D) Minkowski spacetime,111 Our metric signature convention throughout these lectures is . in particular all dimensions being infinite in size, S = Tr∫d4x∫dx5{−14FMNFMN} (2.1) = Tr∫d4x∫dx5{−14FμνFμν−12Fμ5Fμ5}, where are 5D indices, while are 4D indices. We use matrix notation for so that the gauge field is , where are the isospin Pauli matrices. We will study this theory in an axial gauge, . To see that this is a legitimate gauge, imagine that is in a general gauge and consider a gauge transformation, A′M≡igΩ−1DMΩ, Ω(xμ,x5)∈SU(2), (2.2) where is the gauge coupling. It is always possible to find , such that . Ex. Check that this happens for , where represents the path-ordering of the exponential. Ex. Check that in this gauge, S=Tr∫d4x∫dx5{−14FμνFμν+12(∂5Aμ)2}. (2.3) Let us now compactify the fifth dimension to a circle, so that , where is the radius of the circle and is an angular coordinate . See Fig. 1. We can Fourier expand the gauge field in this coordinate, Aμ(xμ,ϕ)=A(0)μ(x)+∞∑n=1(A(n)μ(x)einϕ+h.c.). (2.4) But now we can no longer go to axial gauge; in general our above will not be -periodic. 
The best we can do is go to an “almost axial” gauge where is -independent, , where the action can be written S = Tr∫d4x∫π−πdϕR{−14FμνFμν+12(DμA(0)5)2+12(∂5Aμ)2} = 2πRTr∫d4x{−12(∂μA(0)ν−∂νA(0)μ)2+12(∂μA(0)5)2 +∞∑n=1[−12|∂μA(n)ν−∂νA(n)μ|2+n2R2|A(n)μ|2]+O(A3)}, showing that the 5D theory is equivalent to a 4D theory with an infinite tower of 4D fields, with masses, . This rewriting of 5D compactified physics is called the Kaluza-Klein (KK) decomposition. Ex. Show that if is in a general gauge it can be brought to almost axial gauge via the (periodic) gauge transformation Ω(x,ϕ)≡Peig∫ϕ0dϕ′RA5(x,ϕ′)e−igA(0)5(x)ϕ. (2.6) Note that the sum over of the fields in any interaction term must be zero since this is just conservation of fifth dimensional momentum, where for convenience we define the complex conjugate modes, , to be the modes corresponding to . In this way a spacetime symmetry and conservation law appears as an internal symmetry in the 4D KK decomposition, with internal charges, . Since all of the modes have 4D masses, we can write a 4D effective theory valid below involving just the light modes. Tree level matching yields Seff∼E≪1R2πRTr∫d4x{−14F(0)μνF(0)μν+12(DμA(0)5)2}. (2.7) The leading (renormalizable) non-linear interactions follow entirely from the 4D gauge invariance which survives almost axial gauge fixing. We have a theory of a 4D gauge field and gauge-charged 4D scalar, unified in their higher-dimensional origins. This unification is hidden along with the extra dimension at low energies, but for the tell-tale “Kaluza-Klein” (KK) excitations are accessible, and the full story can be reconstructed in principle. Ex. Check that almost axial gauge is preserved by 4D gauge transformations, (independent of ). Our results are summarized in Fig. 2. ## 3 5D Fermions and the Chirality Problem To proceed we need a representation of the 5D Clifford algebra, . This is straightforwardly provided by Γμ≡γμ,  Γ5≡−iγ5, (3.1) where the ’s are the familiar 4D Dirac matrices. Therefore, 5D fermions are necessarily -component spinors. We decompose them as Ψα(x,ϕ)=∞∑n=−∞Ψ(n)α(x)einϕ. (3.2) Plugging this into the 5D Dirac action gives SΨ = ∫d4x∫dx5¯¯¯¯Ψ(iDMΓM−m)Ψ = ∫d4x∫dx5¯¯¯¯Ψ(iDμγμ−m)Ψ−¯¯¯¯Ψγ5∂5Ψ+ig¯¯¯¯ΨA5γ5Ψ = 2πR∫d4x∞∑n=−∞¯¯¯¯Ψ(n)(iγμ∂μ−m−inRγ5)Ψ(n)+O(¯¯¯¯ΨAΨ). We see that we get a tower of 4D Dirac fermions labelled by integer (no longer positive), with physical masses, m2phys=m2+n2R2. (3.4) For small , this is illustrated in Fig. 3. These fermions are coupled to the gauge field KK tower, again with all interactions conserving 5D momentum, the sum over of all 4D fields in an interaction adding up to zero. At low energies, , we can again get a 4D effective action for the light modes, Seff=E≪1Rm≪1R2πR∫d4x{¯¯¯¯Ψ(0)(iγμDμ−m)Ψ(0)+ig¯¯¯¯Ψ(0)γ5A(0)5Ψ(0)}, (3.5) where the covariant derivative contains only the gauge field . Note that we also have a Yukawa coupling to the 4D scalar, , of the same strength as the gauge coupling, so-called gauge-Yukawa unification. The idea that the Higgs particle may originate from extra-dimensional components of gauge fields was first discussed in Refs. [11]. An unattractive feature in this cartoon of the real world, emerging below , is that the necessity of having Dirac 4-component spinor representations of 5D Lorentz invariance has resulted in having 4-component non-chiral 4D fermion zero-modes. The Standard Model is however famously a theory of chiral Weyl 2-component fermions. Even as a cartoon this looks worrying. 
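Before continuing, a quick numerical illustration of the two KK towers just described (gauge modes with masses n/R, and Dirac fermion modes with masses sqrt(m^2 + n^2/R^2) from eq. (3.4)) may be useful. The short Python sketch below is my own aside, not part of the lectures; R and m are arbitrary illustrative choices.

```python
from math import sqrt

# Illustrative KK towers for a compactification radius R (arbitrary units).
R = 1.0          # compactification radius (illustrative)
m = 0.3 / R      # 5D Dirac mass parameter (illustrative)

gauge_tower   = [n / R for n in range(0, 5)]                     # m_n = n/R
fermion_tower = [sqrt(m**2 + (n / R)**2) for n in range(0, 5)]   # eq. (3.4)

print("gauge KK masses  :", [round(x, 3) for x in gauge_tower])
print("fermion KK masses:", [round(x, 3) for x in fermion_tower])
# Below the scale 1/R only the n = 0 modes survive in the 4D effective theory.
```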
This general problem in theories of higher dimensions is called the “chirality problem” and we will return to deal with it later. ## 4 Light Scalar Quantum Corrections Given that light scalars are unnatural in (non-supersymmetric) quantum field theories, it is rather surprising to see a massless 4D scalar, , emerge from higher dimensions. Of course, we should consider quantum corrections to our classical story and see what happens to the scalar mass. From a purely 4D effective field theory viewpoint we would expect there to be large divergent corrections to the scalar mass coming from its gauge and Yukawa couplings, from diagrams such as Fig. 4, δm2scalar∼g2416π2Λ2UV, (4.1) suggesting that the scalar is naturally very heavy. But from the 5D viewpoint is massless because it is part of a 5D gauge field, whose mass is protected by 5D gauge invariance. So the question is which viewpoint is correct? To find out let us first compute the 1-fermion-loop effective potential for [12]. For this purpose we treat as completely constant, and . Then, SΨ=2πR∫d4x∑n¯¯¯¯Ψ(n)(x)[i⧸∂−m−i(nR−a)γ5]Ψ(n)(x), (4.2) where ⧸∂≡γμ∂μ. (4.3) Since is constant, SΨ=2πR∫d4p(2π)4∑n¯¯¯¯Ψ(n)(p)[⧸p−m−i(nR−a)γ5]Ψ(n)(p). (4.4) After Wick rotating, this gives SEΨ=∑n2πR∫d4p(2π)4¯¯¯¯Ψ(n)(p)[⧸p+im+(nR−a)γ5]Ψ(n)(p). (4.5) Integrating out the fermions by straightforward Gaussian Grassman integration, e−Veff = ∏p,ndet[⧸p+im+(nR−a)γ5] (4.6) = From now on, I will simplify slightly by considering a gauge group rather than . All subtleties will come from finite , so we focus on ∂Veff∂R = −∑n∫d4p(2π)4tr⎡⎢ ⎢⎣−nR2γ51⧸p+im+(nR−a)γ5⎤⎥ ⎥⎦ (4.7) = = ∑n∫d4p(2π)44n(n−a)p2+(n−a)2+m2, where we have gone to units in the last line. Naively, this integal and sum over is quintically divergent! So let us carefully regulate the calculation by adding Pauli-Villars fields, in a 5D gauge-invariant manner. These fields have the same quantum numbers as , but have cut-off size masses , some with Bose rather than Fermi statistics. Thereby, (4.8) The regulator terms resemble the physical term except for having cutoff size masses and with signs (determined by the statistics of the regulator field) chosen in such a way that the entire expression converges. The big trick for doing the sum on is to replace it by a contour integral, ∂Veff∂R=∫d4p(2π)4∮Cdz1e2πiz−1(4z(z−a)p2+(z−a)2+m2+Reg.), (4.9) where the contour is shown in Fig. 5, following from the simple poles of the factor and from the residue theorem. The semi-circles at infinity needed to have a closed contour are irrelevant because the integrand vanishes rapidly enough there, precisely because of the addition of the regulator terms. We can deform the contour to that shown in Fig. 6 without encountering any singularities of the integrand, so that by the residue theorem, ∂Veff∂R = −4πi∫d4p(2π)4[a+i√p2+m2e2πiae−2π√p2+m2−1 (4.10) +a−i√p2+m2e2πiae2π√p2+m2−1+Reg.]. We can also write this as ∂Veff∂R = 4π∫d4p(2π)4[√p2+m2−iae2πiae−2π√p2+m2−1−√p2+m2+iae2πiae2π√p2+m2−1 (4.11) +(√p2+m2−ia)⎛⎝e2πiae−2π√p2+m2−1e2πiae−2π√p2+m2−1⎞⎠ where we have just added and subtracted the same quantity in the last two terms (not counting the regulator terms). Note that the overbraced terms cancel out, leaving ∂Veff∂R = 4π∫d4p(2π)4[(−√p2+m2−iae−2πiaRe2πR√p2+m2−1)+c.c. (4.12) −(√p2+m2−ia)]+Reg. where we have put back explicitly, by dimensional analysis. Now let us integrate with respect to , Veff = ∫d4p(2π)4{−4Reln(1−e−2πR√p2+m2e2πiaR) (4.13) +irrelevantconst. 
In the limit, must be independent of since certainly all potential terms for gauge fields vanish by gauge invariance as usual. This yields the identity, Veff⟶R→∞−4πR∫d4p(2π)4(√p2+m2−ia)+Reg.≡ΛR, (4.14) where is a constant independent of and . Ex. Directly show the cancellation of -dependence in the right hand side of eq. (4.14) by carefully writing out the regulator terms. Using this identity in eq. (4.13) yields (4.15) This formula has some remarkable properties. The first term is indeed highly cutoff dependent, but it does not depend on . The integrand of the second term behaves as for large and therefore the integrals converge. The regulator terms are suppressed by factors and can be completely neglected for (or more formally, for ). We therefore drop the -dependent regulator terms from now on. Finally, combining complex exponentials we arrive at our final result, Veff = ΛR−2∫d4p(2π)4ln(1+e−4πR√p2+m2 (4.16) −2e−2πR√p2+m2cos(2πRgA(0)5)), which is illustrated in Fig. 7 . For small , this can be approximated, Veff ∼ ΛR+∫d4p(2π)4{−4ln(1−e−2πR√p2+m2) −(2πRgA(0)5)2⎡⎢ ⎢ ⎢⎣e−2πR√p2+m2(1−e−2πR√p2+m2)2⎤⎥ ⎥ ⎥⎦ +(2πRgA(0)5)4⎡⎢ ⎢ ⎢⎣e−2πR√p2+m26(1−e−2πR√p2+m2)2+e−4πR√p2+m2(1−e−2πR√p2+m2)4⎤⎥ ⎥ ⎥⎦}. We see immediately that the vacuum has non-vanishing , ⟨A(0)5⟩∼1Rg, (4.18) for . Let us now return from considering gauge group back to . Nothing much changes as far as the loop contribution we have just considered ( is just to be replaced by , where the trace is over gauged isospin) but now there are also diagrams involving gauge loops which contribute to the effective potential. See Fig. 8. By similar methodology, these give a contribution illustrated in Fig. 9. We see that there is a competition now between the contribution from gauge loops which prefers a vacuum at versus the fermion loops which prefer a vacuum at . But clearly if we include sufficiently many identical species of , their contribution must dominate, and . Since is an isovector, a non-zero expectation necessarily breaks the gauge group down to . One can think of this as a caricature of electroweak symmetry breaking where the preserved is electromagnetism and is the Higgs field! We refer to it as “radiative symmetry breaking” (also the “Hosotani mechanism” [12]) because it is a loop effect that sculpted out the symmetry breaking potential. In this symmetry breaking vacuum or Higgs phase, we can easily estimate the physical mass spectrum, mγ(0)=0 mW±(0)∼1R mΨ(0)∼√m2+1R2⟶m→01R mKK∼1R m2Higgs''∼g232π3R3. (4.19) Now this is certainly an interesting story theoretically, but it is surely dangerous to imagine anything like this happening in the real world because we are predicting , and such light KK states should already have been seen. However, there is a simple way to make the KK scale significantly larger than , by making ⟨A(0)5⟩≪1Rg. (4.20) Note that for small we have Veff = VΨ--loopeff+V% gauge--loopeff∼smallaΛR (4.21) +[c1−c2(m)N](Ra)2+[c3+c4(m)N](Ra)4, where the ’s are order one and positive, and depend on the 5D fermion mass , and is the number of species of fermions. Now let us tune to achieve −c1+c2(m)N≡ε≪1 c3+c4(m)N∼O(1), (4.22) from which it follows that there is a local minimum of the effective potential (a possibly cosmologically stable, false vacuum) with A(0)5∼√εgR. (4.23) This yields the hierarchy, mW±∼√εR∼√εmKK. 
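Since the final result (4.16) is only exhibited graphically (Fig. 7), it is easy to check the claimed vacuum structure numerically. The sketch below is my own illustration, not part of the lectures: it evaluates the one-fermion-loop potential of eq. (4.16), dropping the a-independent ΛR piece, in units 1/R = 1 with illustrative values of g and m, and confirms that the minimum sits near a = 1/(2Rg), i.e. ⟨A5⟩ ∼ 1/(Rg) as stated in the text.

```python
import numpy as np
from scipy.integrate import quad

# One-fermion-loop potential of eq. (4.16), a-independent Lambda*R term dropped.
# Units 1/R = 1; g and m are illustrative choices, not values from the text.
R, g, m = 1.0, 1.0, 0.1

def V(a):
    theta = 2.0 * np.pi * R * g * a
    def integrand(p):
        E = np.sqrt(p**2 + m**2)
        return p**3 * np.log(1.0 + np.exp(-4.0*np.pi*R*E)
                                 - 2.0*np.exp(-2.0*np.pi*R*E)*np.cos(theta))
    # Euclidean measure: d^4p/(2*pi)^4 -> p^3 dp / (8*pi^2) after the angular integral
    val, _ = quad(integrand, 0.0, 20.0)
    return -2.0 * val / (8.0 * np.pi**2)

a_grid = np.linspace(0.0, 1.0 / (R * g), 21)
V_vals = [V(a) for a in a_grid]
print("minimum at a =", a_grid[int(np.argmin(V_vals))], "~ 1/(2*R*g)")
```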
(4.24) ## 5 Orbifolds and Chirality If we ask whether our results thusfar could be extended to a realistic model of nature, with the standard model as a low energy limit, we encounter some big problems, not just problems of detail: a) The previously mentioned chirality problem. b) Yukawa couplings of the standard model vary greatly. Our low energy fermion modes seem to have Yukawa couplings equal to their gauge coupling, a reasonable cartoon of the top quark but not of other real world fermions. A very simple way of solving (a) is to replace the fifth dimensional circle by an interval. The two spaces can be technically related by realizing the interval as an “orbifold” of the circle. This is illustrated in Fig. 10, where the points on the two hemispheres of the circle are identified. Mathematically, we identify the points at or with or . In this way the physical interval extends a length , half the circumference of our original circle. This identification is possible if we also assign a “parity” transformation to all the fields, which is respected by the dynamics (i.e. the action). The action we have considered above has such a parity, given by P(x5)=−x5P(Aμ)=+AμP(A(0)5)=−A(0)5P(ΨL)=+ΨLP(ΨR)=−ΨR, (5.1) precisely when the 5D fermion mass vanishes, . We consider this case for now. Ex. Check that the action is invariant under this parity transformation. With such a parity transformation we continue to pretend to live on a circle, but with all fields satisfying Φ(xμ,−x5)=P(Φ)(xμ,x5). (5.2) That is, the degrees of freedom for are merely a reflection of degrees of freedom for , they have no independent existence. Of course we also require circular periodicity, Φ(xμ,ϕ+2π)=Φ(xμ,ϕ). (5.3) These conditions specify “orbifold boundary conditions” on the interval, derived from the the circle, which of course has no boundary. We can write out the mode decompositions (in almost axial gauge) for all the fields subject to orbifold boundary conditions, Aμ(x,ϕ) = ∞∑n=0A(n)μ(x)cos(nϕ) A5(x,ϕ) = 0Lost Higgs''! ΨL(x,ϕ) = ∞∑n=0Ψ(n)L(x)cos(nϕ) ΨR(x,ϕ) = ∞∑n=1Ψ(n)R(x)sin(nϕ)LostΨ(0)R! (5.4) One unfortunate consequence we see is that has no modes, in particular orbifolding has eliminated our candidate Higgs! The good consequence is for the chirality problem, in that the massless right-handed fermion is eliminated, only the massless left handed fermion mode is left. The low energy effective theory below is just Seff=E≪1R2πR∫d4x{−14(F(0)μν)2+¯¯¯¯Ψ(0)LiDμγμΨ(0)L}. (5.5) With gauge group, if is an isodoublet (so that is an isodoublet), the only possible gauge invariant mass term for the light mode, ΨLiαΨLjβϵijϵαβ, (5.6) vanishes by fermi statistics. Therefore we apparently have a chiral effective gauge theory below . Unfortunately this theory is afflicted by a subtle non-perturbative “Witten anomaly”, so the theory is really unphysical. However, if we consider to be in the isospin representation, we again get a chiral gauge theory, but now not anomalous in any way. Having seen that the chirality problem is soluble, we need to recover our Higgs field. (For discussion of related mechanisms and further references see the TASI review of Ref. [13].) To do this we must enlarge our starting gauge group, from SU(2)≅SO(3) (5.7) to . Gauge fields are conveniently thought of as anti-symmetric matrices, , in the fundmental gauge indices . For simplicity we choose fermions in the fundamental representation, . 
The action, S = tr∫d4x∫dx5{−14FμνFμν+12(∂5Aμ)2+12(DμA(0)5)2 (5.8) +¯¯¯¯ΨiDμγμΨ−¯¯¯¯Ψγ5∂5Ψ+ig¯¯¯¯ΨiAij(0)5γ5Ψj}, is invariant under the orbifold parity given by P(A^i^jμ)=+A^i^jμP(A^i^j5)=−A^i^j5P(ΨL^i)=+ΨL^iP(ΨR^i)=−ΨR^iP(A^i4μ)=−A^i4μP(A^i45)=+A^i45P(ΨL4)=−ΨL4P(ΨR4)=+ΨR4, (5.9) where Ex. Check by mode decomposition that this leaves 4D massless fields, A^i^j(0)μ,A^i45(0),ΨL^i(0),ΨR4(0), (5.10) that is, a 4D gauge field, a 4D Higgs triplet of , a left-handed fermion triplet of , and a right-handed singlet of . This illustrates how (orbifold) boundary conditions on extra dimensions can break the gauge group of the bulk of the extra dimensions. The low-energy effective theory is given by Seff=E≪1R2πR∫d4x{−14F(0)μνFμν(0)+12(DμA^i45(0))2 +¯¯¯¯ΨL^i(0)(iDμγμΨL(0))^i+¯¯¯¯ΨR4(0)i∂μγμΨR4(0) +ig(¯¯¯¯ΨL(0)^iA^i4(0)5ΨR(0)4+¯¯¯¯ΨR(0)4A^i4(0)5ΨL(0)^i)}. (5.11) This contains 4D gauge theory with two different representations of Weyl fermions Yukawa-coupled to a Higgs field. This again bears some resemblence to the standard model if we think of the fermion as the left and right handed “top” quark. But what of the second problem we identified, (b), that the standard model contains some fermions with much smaller Yukawa couplings than gauge coupling? Such fermions can arise by realizing them very differently in the higher-dimensional set-up. The simplest example is illustrated in Fig. 11, where beyond the fields we have thusfar considered, which live in the “bulk” of the 5D spacetime, there is a 4D Weyl fermion precisely confined to one of the 4D boundaries of the 5D spacetime, say . It can couple to the gauge field evaluated at the boundary if it carries some non-trivial representation, say triplet. This represents a second way in which the chirality problem can be solved, localization to a physical 4D subspace or “3-brane” (a “”-brane has spatial dimensions plus time), in this case the boundary of our 5D spacetime. The new fermion has action, Sχ=∫d4x¯¯¯¯χ\raisebox−5.0ptL^i(x)[i∂μδ^i^j+gA^i^jμ(x,ϕ=π)]χ\raisebox−5.0ptL^j(x). (5.12) At low energies, , this fermion will have identical gauge coupling as the triplet, but it will have no Yukawa coupling, thereby giving a crude representation of a light fermion of the standard model. Well, there are other tricks that one can add to get closer and closer to the real world. Ref. [14] gives a nice account of many model-building issues and further references. I want to move in a new direction. ## 6 Matching 5D to 4D couplings Let us study how effective 4D couplings at low energies emerge from the starting 5D couplings. Returning to pure Yang-Mills on an extra-dimensional circle, we get a low-energy 4D theory, S4eff∼E≪1R2πR∫d4x{−14F(0)μνFμν(0)+12(DμA(0)5)2}. (6.1) The fields are clearly not canonically normalized, even though the 5D theory we started with was canonically normalized. We can wavefunction renormalize the 4D effective fields to canonical form, φ≡A(0)5√2πR,¯Aμ≡A(0)μ√2πR, (6.2) and see what has happened to the couplings, S4eff = 2πR∫d4x{−14(∂μAa(0)ν−∂νAa(0)μ−ig5ϵabcAb(0)μAc(0)ν)2 (6.3) +12(∂μA(0)5−ig5A(0)μA(0)5)2} = ∫d4x{−14(∂μ¯Aaν−∂ν¯Aaμ−ig5√2πRϵabc¯Abμ¯Acν)2 +12(∂μφ−ig5√2πR¯Aμφ)2}. From this we read off the effective 4D gauge coupling, g4eff=g5√2πR. (6.4) Ex. Check that this is dimensionally correct, that 4D gauge couplings are dimensionless while 5D gauge couplings having units of . For experimentally measured gauge couplings, roughly order one, we require g5∼O(√2πR). 
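The matching relation (6.4), g4eff = g5/sqrt(2πR), is worth a two-line numerical check just to keep the dimensions straight: in 5D the gauge coupling squared carries units of length. The numbers in the sketch below are illustrative choices of mine, not values from the text.

```python
from math import pi, sqrt

# Tree-level matching of the 5D gauge coupling to the 4D one, eq. (6.4):
#   g4 = g5 / sqrt(2*pi*R)   =>   g5 = g4 * sqrt(2*pi*R)
# Natural units (GeV); the numbers are illustrative.
inv_R = 2000.0            # KK scale 1/R = 2 TeV (illustrative)
R = 1.0 / inv_R           # GeV^-1
g4 = 0.65                 # a 4D gauge coupling of roughly weak-coupling size

g5 = g4 * sqrt(2 * pi * R)        # units GeV^(-1/2): 5D couplings are dimensionful
print(g5**2 / (2 * pi * R))       # = g4**2 ~ 0.42, i.e. g5^2 = O(2*pi*R)
```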
(6.5) ## 7 5D Non-renormalizability Now, having couplings with negative mass dimension is the classic sign of non-renormalizability, and as you can easily check it happens rather readily in higher dimensional quantum field theory. There are various beliefs about non-renormalizable theories: a) A non-renormalizable quantum field theory is an unmitigated disaster. Throw the theory away at once. Only a few people still hold to this incorrect viewpoint. b) A non-renormalizable quantum field theory can only be used classically, for example in General Relativity where has negative mass dimension. All quantum corrections give nonsense. This incorrect view is held by a surprisingly large number of people. c) The truth (what I believe): Non-renormalizable theories with couplings, , with negative mass dimension can make sense as effective field theories, working pertubatively in powers of the dimensionless small parameter, , where is the mass dimension of . To any fixed order in this expansion, one in fact has all the advantages of renormalizable quantum field theory. There are even meaningful finite quantum computations one can perform. In fact we have just done one in computing the quantum effective potential. But of course there is a price: the whole procedure breaks down once the formal small parameter is no longer small, . At higher energies the effective field theory is useless and must be replaced by a more fundamental and better behaved description of the dynamics. Ex. Learn (non-renormalizable) effective field theory at the systematic technical level as well as a way of thinking. A good place to start is the chiral Lagrangian discussion of soft pions in Ref. [15]. In more detail, perturbative expansions in effective field theory will have expansion parameters, , divided by extra numerical factors such as ’s or ’s. These factors are parametrically order one, but enough of them can be quantitatively significant. These factors can be estimated from considerations of phase space. I will just put these factors in correctly without explanation. Ex. Learn the art of naive dimensional analysis, including how to estimate the ’s and ’s (for some discussion in the extra-dimensional context see Ref. [16]). Use this in your work on extra dimensions. Our findings so far are summarized in Fig. 12. The non-renormalizable effective field theory of 5D gauge theory breaks down when the formal small parameter, , gets large, that is we can define a maximum cutoff on its validity, . The 5D effective theory cannot hold above this scale and must be replaced by a more fundamental theory. Let us say this happens at . From here down to we have 5D effective field theory, and below we have 4D effective field theory. We found it interesting that a Higgs-like candidate emerged from 5D gauge fields because it suggested a way of keeping the 4D scalar naturally light, namely by identifying it as part of a higher-dimensional vector field. But given that , and 4D gauge couplings are measured to be not much smaller than one, we must ask how well this extra-dimensional picture is doing at addressing the naturalness problem of the Higgs. In Fig. 13 we make the comparison with purely 4D field theory with a UV cutoff imposed. We see that in the purely 4D scenario one naturally predicts a weak scale
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.889651358127594, "perplexity": 1031.715825817803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00116.warc.gz"}
http://math.gatech.edu/node/16914
## Multiplicity of solutions for non-local elliptic equations driven by the fractional Laplacian Series: CDSNS Colloquium Tuesday, January 7, 2014 - 3:05pm 1 hour (actually 50 minutes) Location: Skiles 005, Beijing Normal University We consider the semi-linear elliptic PDE driven by the fractional Laplacian: \begin{equation*}\begin{cases} (-\Delta)^s u = f(x,u) & \text{in } \Omega, \\ u = 0 & \text{in } \mathbb{R}^n\setminus\Omega. \end{cases}\end{equation*} An $L^{\infty}$ regularity result is given, using the De Giorgi-Stampacchia iteration method. By the Mountain Pass Theorem and some other nonlinear analysis methods, the existence and multiplicity of non-trivial solutions for the above equation are established. The validity of the Palais-Smale condition without the Ambrosetti-Rabinowitz condition for non-local elliptic equations is proved. Two non-trivial solutions are given under some weak hypotheses. Non-local elliptic equations with concave-convex nonlinearities are also studied, and the existence of at least six solutions is obtained. Moreover, a global result of Ambrosetti-Brezis-Cerami type is given, which shows that the effect of the parameter $\lambda$ in the nonlinear term considerably changes the nonexistence, existence and multiplicity of solutions.
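As background for the abstract above (this definition is standard and not part of the announcement): up to a positive normalizing constant $c_{n,s}$, the fractional Laplacian for $s\in(0,1)$ is the singular integral operator
\begin{equation*}
(-\Delta)^s u(x) = c_{n,s}\,\mathrm{P.V.}\int_{\mathbb{R}^n}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy ,
\end{equation*}
whose nonlocality is the reason the Dirichlet condition in the problem is imposed on all of $\mathbb{R}^n\setminus\Omega$ rather than only on $\partial\Omega$.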
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8876519799232483, "perplexity": 4301.069139657976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512382.62/warc/CC-MAIN-20181019082959-20181019104459-00220.warc.gz"}
http://www.eurotrib.com/comments/2009/4/23/53433/4024/19
A college friend of mine became the Director of the Bureau of Trade Statistics in the Department of Commerce during the Nixon Administration and held that position into the mid '90s.  One of his primary responsibilities was maintaining the Bureau's computer model of the US economy as adapted to account for international trade.  I was fascinated by the idea of being able to "what if" policy options. With current models only a few people have the opportunity to make such queries. It would seem that the explosion of computing power should make it possible to put such models into open source code that could run on relatively inexpensive platforms as a distributed computing system over a network or the internet.  Run in stand-alone mode, such a program would enable individuals to model the effects of various policies and events.  But a more interesting model might be a sort of economic "Second Life" where a multitude of separate agents take actions directed primarily towards the benefits to their particular institutional avatar, and the combined effects on the individual avatars and the aggregate model of the economy could be observed (a toy sketch of this kind of agent-based setup appears at the end of this thread).  This might require a greatly more powerful processor or it might be possible to accomplish via distributed processing. I have a sense that this would likely give rise to useful management information and could lead to gainful employment for a whole new group of specialists.  Were it set up so that subsets of the users could devise, employ and test new organizations and techniques in separate economy domains it could allow experimental meta-economics.  If these proved promising, they could be re-integrated into the basic group model and the effects of new agents acting on the existing system could be observed. It might be important to brand, patent and copyright such software and commercialize it so as to make it more difficult to suppress or control by stasis-seeking forces, or perhaps I am overly paranoid.  The overarching goal should be to make this as entertaining, exciting and insight-generating as possible while protecting the open source nature and the right of all to use and modify the programs.  I do not know how to properly resolve these issues, let alone how to create the models.  But it seems to me that such programs could revolutionize the way we conceptualize and actualize our common culture. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere." AFAIK, I'm the only one around here who has done something similar.  In my considered judgment this is a non-trivial task.  Or, in English, "Abandon All Hope Ye Who Enter Here." The problem isn't the computing power.  The problem is the software that tells the hardware what to do, when.  To go anywhere beyond a toy you'll eventually have to deal with phenomenological relevance and THAT, my friends, is a real bitch¹.  The usual method is to ignore it by building some flavor of Expert System.  Now the problem with that is well known: anything not specifically programmed is deemed irrelevant, which runs you into all kinds of limitations.  The other method, which doesn't work either, is to implement some kind of a randomizer decision process.
In any event, eventually you run smack into the Lyapunov Exponent, which can look like any of several quite different graphs [the original comment showed four plots here, not reproduced], depending on what it is you're dealing with and how you're looking at it at any particular moment with your chosen methodology at that moment.  Each of those graphs is a different way of looking at - call it - Information Integrity, and each of them is "right," for a given value of right-ness, AND each of them is "wrong," for a given value of wrong-ness.  Because this is a fractal dimension, the right/wrong versus right-ness/wrong-ness is neither linear NOR Euclidean, nor is the seemingly straightforward right/right-ness and wrong/wrong-ness².   Just to complicate everything further, the relationship of semiots ('words') to semantics (what the word points at, what it "means" on another axis) is 1:Many.  Technically there's no theoretical limit, giving 1:∞, but humans get around this through applying arbitrary limitations, e.g., grammar, and verbal negotiating, which a computer doesn't know either. To quote Drew, "Wheeeeeeeeeeee!" Throwing a lot of people at the problem, the Open Source route, isn't going to do a damn bit of good IF the goal is an explanatory model.  Even people with decades of experience and adequate funding can't - really - do a proper job.  (Ladies and Gentlemen of the jury, I give you the IPCC Model and Report in evidence.)  And it is very much an open question if the most efficient way forward would be to build an Artificial Life system using genetic algorithms (with or without classifiers?) and let the little buggers build the model for you.   Because in the final analysis, We don't know how to write a good model of the economy.  Various good and brainy people have tried over the years and bounced every time.   -------------- ¹  I could bore you to death with why.  Suffice to say, when faced with n-dimensional streams of data and the odd datum popping in with varying potential and actual affective and effective implications, merely doing the comparison operations takes you out of Real Time user response. ²  To Them Wot Know .... yeah, but YOU try it in 50 words or less.  ;-) Skepticism is the first step on the road to truth. -- Denis Diderot I agree. But surely this is an argument FOR distributed citizen decision-making, not against? If the 'expert-few' can't model the economy, then why would the 'amateur-many' decisions be less useful? You can't be me, I'm taken Sven Triloqvist: If the 'expert-few' can't model the economy, then why would the 'amateur-many' decisions be less useful? What, the market knows best, now? Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K.
Galbraith i think what no-one ever says is that you can't preview, prognosticate, or prophetise consistently about something as inherently nebulous or unpredictable as the sum of millions of individual choices. it's a kind of Everest no-one has climbed, though many gaze at the summit longingly, and many have fallen off its slippery faces... anyone that really consistently pulled it off would be able to name and number their income. with all that computers can do to ease our toil, it's understandable that man should want to use them to find the rosetta stone of economics. until humans become completely linear creatures (too many heading that way imo), those who want to guarantee their winnings merely have to tithe a chunk to the lobbyists, pressuring politicians to hamstring regulators. it can get messy though, which is why everyone would like to find an algorithm to make them easy squillions, like a blackjack 'system'. back to dreaming of the view from everest! "We can all be prosperous but we can't all be rich." Ian Welsh Welcome to My World© i think what no-one ever says is that you can't preview, prognosticate, or prophetise consistently about something as inherently nebulous or unpredictable as the sum of millions of individual choices. No you can't.  What you can do is construct a decision-making environment that simulates¹ what might happen when this, that, or the other comes down.  Now, unregenerate cynics equate "simulate" with "pulling it out of the air."  I prefer to define it as "a systematic procedure resulting in a very informed guess."  :-) Fortunately, there are certain patterns and breakouts - statistically, psycho-socially, and others - to human behavior that can help boundary the problem.  And, with a little bit of intellectual effort and humility, it's possible to get the general thrust of things.  Best example is Exponential Growth, if you've only got x amount of something, and you can only use x-y% of it then using the Growth rate it is possible to determine, more-or-less accurately, when you ain't got no more x.  When you ain't got no more x either you - and everybody else - does without or find a substitute.  That's a general law applicable across the whole range of human activities. Skepticism is the first step on the road to truth. -- Denis Diderot What, the market knows best, now?Well, we have had markets longer than we have had writing and we have only recently tried to analyze these markets with mathematical or computer models.  Our brains have been dealing with complex, emergent phenomena for a long time. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere." It's an argument for "democratizing" Model building, I guess.  Book published in the US last year comparing the Experts' forecasting with quality of forecasting.  Turns out the greater the "qualification," the more eddymacation the expert had in the area, the worse the accuracy of predictions.  That, at least, is the rumor.  I need to grab a copy, sit down, and read the sucker.   Skepticism is the first step on the road to truth. -- Denis Diderot That's part of the process. We have a strange idea, which seems apply almost exclusively to economics, politics and business, that experts are good at predictions. In reality almost everyone is better at making predictions than the experts are. There are two things happening here. One is that expertise isn't based on predictive ability, but on following the party line. 
It's a perfectly soviet system, where talking rubbish gets you immense rewards. The other is the corollary that decisions are made by the people who are least competent to run it - because the goal isn't to make intelligent decisions, it's to maintain power differentials for as long as possible.   If you had a modelling and prediction game you could find individuals who had a talent for intuitive modelling and insight and give them something useful to do. They wouldn't necessarily have to know how to build models out of differential equations, but they would have a better than usual batting average when it came to being right about the future. These people would potentially be very, very valuable. Currently I suspect most of them are wasted in jobs which aren't ideally suited to their talents. Make this a diary. Throw in a discussion of The Club of Rome's The Limits of Growth and their World3. Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith I read the Club of Rome report in 1975 and haven't had a chance to look at what they are up to recently; thus, my thoughts about their work are worthless.  Tell you what, why don't you add the CoR stuff making the diary a Mig/AT co-production? Skepticism is the first step on the road to truth. -- Denis Diderot Me like. I have a copy of the 2nd edition somewhere in my mother's flat and would like to get the 3rd edition and the details of the model. Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith Great exposition of the difficulties of modelling reality.  Just when you think you have all the key variables nailed down, something else pops up from left field.   If our Government models of the economy were that good, predicting the sub-prime or derivatives crises should have been child's play.  Hell, many people did it without the benefit of a computer model. The problem is that the assumptions you feed in tend to be regurgitated in a new more "objective" quasi-scientific axiomatic form - and come to be seen as the "natural" or "real" world - when all they were is assumptions, deliberate over-simplifications, for the purpose of making the system modelable.   And sometimes, of course, they are much more than just over-simplifications, but deliberate political choices to emphasise some factors and ignore others because it suits a particular political agenda.  See the rise of neo-liberal economics to obscure the insights of Henry George who worked from different assumptions to achieve different objectives. Having said all that, I would be a fan of making computer models/simulations of the economy more generally accessible, if only to better inform political debate.  The key thing to be aware of, howeverare the assumptions which underlie the model - as these can bias the entire result sets towards some options and against others. Thus what is the impact of an increase in income tax on worker productivity is a valid question to ask.  But is it more or less important than: what is the economic impact of increased public expenditure on public health/education/social welfare?  And how do we measure the non-economic benefits of such expenditure? notes from no w here The ability to model a local political-economy has been around for a long time: SimCity does as good a job as one can expect given their limitations and methodology. 
And since, On January 10, 2008 the SimCity source code was released under the free software GPL 3 license under the name Micropolis. that's where I'd start my research.  If I was going to do something ... which I'm not¹.   SimCity, as is, does a good enough job that it is widely used by urban planners, city planners, & etc. as a tool.  Idly projecting here but it seems to me a neural network-like construct having lots of nodes - 200 or 300 - running SimCity autonomously tossing in cross-communication plus "macro-economic" influences together comprising a first-hack Fitness Landscape would prove interesting.  The easiest way to add complexity to the Fitness Landscape is to build some - say 10 to 20 - analog computers, cross-linkable with A/D and D/A conversion to/from the neural net to simulate continuous data streams which the neural net nodes have to 'chunk' - both for data processing and across time. As an added benefit, you'd get the first major non-von Neumann computer built in a long, long time.  Which, by the way, is something we desperately need as a research tool.   ¹  Unless somebody wants to pay me to do it.  This gun for hire.  I can be had.  :-) Skepticism is the first step on the road to truth. -- Denis Diderot The ability to model a local political-economy has been around for a long time: SimCity does as good a job as one can expect given their limitations and methodology,.. Perhaps a model national economy, or series of national economies, could be constructed which consist significantly of the interactions of the city models, models of national social and economic policies, plus models of agriculture, mining and other resource extraction or harvesting activities.  Etc., etc. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere." There is an interesting problem modeling resource extraction.  Resource extraction essentially it breaks into renewables and one-time consumables with the latter breaking down into recyclables, metals in particular, and use-once, e.g.,oil-as-energy.   There's also two price points for the extractors:  Rape & Pillage, where the extractors do not have to pay for the ecological damage they create  Pay-for-It, where the extractors have to pay for the damage The first is what we got.  The second is what we should be doing and, in some cases, is where we're headed. So, how you program/process resource extraction depends on if the Model is what-we-got XOR "The Economy." Skepticism is the first step on the road to truth. -- Denis Diderot There's also two price points for the extractors Creating a user choice or a continuum of choices between those two poles as a user option would rather quickly get the idea across to users in a city-state-with -hinterland basic Sim type game, especially with quarterly or yearly turn times.  Some versions of Railroad Tycoon modeled timber, coal, petroleum and uranium, along with associated industries, as available resource-industrial exploitation opportunities for the railroads.  Something like that could be incorporated, along with water and road transport.  Experienced game developers could probably produce addictive type games that would drive home the implications of some of the choices.  Call the game "Can you save the world?" As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere." Model ≠ Game An accurate Economic Model would require hardware components that are not generally available.  
Get around this by having the system internet 'savvy.' Problem: this implies the Model will spend its time running -- let me put it -- "naive" simulations at the expense of more "interesting" ones. For any given value of "naive" and "interesting." It would also require running and managing a software system beyond the sophistication of the average computer user. There's no getting around this one unless a 'brain damaged' version is released to the general public. Skepticism is the first step on the road to truth. -- Denis Diderot

Were the system to incorporate a group of analog computers it would inherently require specialized hardware anyway, and would be used for "serious" work by business, government and academia. Game type applications would need to be rather restricted in scope, all digital, but could be sold cheaply. The idea there would be to get certain ideas across to the user with a sledge hammer. Naturally that usually mangles something. Your concerns regarding patents agree with my own experience. And after you get one, it is only a license to sue. Copyright may be more useful. Putting patentable ideas firmly in the public domain is probably the best approach there. Aside from possible profits I find the whole idea fascinating. I only wish I were better able to contribute on the programming side. But that is a hopeless prospect at this point in my life. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

This thing is a Blue Sky pipe-dream. My informed WAG is a project of this magnitude would require serious funding: at least in the $25 to $50 million range. (Hardware engineering is incredibly expensive.) And with a small chance of recouping the $ injected. Certainly no private funding would be forthcoming, and we're not USDA Stamp-of-Approval Serious People® so government funding is out as well. It might recoup the money in spin-offs: games, books, seminars, & etc., but "might" cuts no checks. Skepticism is the first step on the road to truth. -- Denis Diderot

I am not sure what you are saying (in general) about modeling an 'economy': that it is impossible, too expensive, or something else? Economies are modeled - they have to be modeled in some way, for governance. That is why there are national budgets, and offices of budget and management. Now, these models may be imprecise, contentious or too narrow, but they are used to predict the effect of changes in the rules of different types of transactions of citizens, companies, organizations, government and 'external transactions' (i.e. how these rules might interact with the rules of other budget systems in other nations). However 'inaccurate' such national budgets may be, it is correct to say that building a parallel one would be an expensive project. However, the Dutch government is already considering the use of XBRL to 'code' their national budget in such a way as to allow any citizen to play 'what if' with it. The mandated use of XBRL would also bring transparency to corporate accounting, because the tax authorities could also play 'what if' with those accounts. Stress testing, I suppose you'd call it ;-) An explanation of XBRL is here. There's an XBRL conference in Paris 23-25 June. So there may be a 'game' to be played in the future that does not require the building of the model.
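Since the 'what if' idea keeps coming up in this thread, here is a minimal sketch of what such a toy budget game might look like in code. Everything in it — the categories, the rates, the labour-supply elasticity — is an invented placeholder for illustration; it is not real XBRL and not anyone's actual budget model. The point is only that the assumptions (here, one elasticity number) visibly drive the answer.

```python
# Toy 'what if' budget: tweak a policy lever and see the projected balance.
# All category names and numbers are invented for illustration only.

BASELINE = {
    "income_tax_rate": 0.30,   # policy lever
    "tax_base": 500.0,         # bn of taxable income
    "health": 60.0,            # bn of spending
    "education": 45.0,
    "social_welfare": 80.0,
}

def project_balance(scenario, labour_elasticity=-0.2):
    """Revenue minus spending, with a crude behavioural response:
    raising the tax rate shrinks the tax base a little."""
    base = scenario["tax_base"] * (
        1 + labour_elasticity * (scenario["income_tax_rate"] - BASELINE["income_tax_rate"])
    )
    revenue = scenario["income_tax_rate"] * base
    spending = scenario["health"] + scenario["education"] + scenario["social_welfare"]
    return revenue - spending

what_if = dict(BASELINE, income_tax_rate=0.33, health=65.0)
print(f"baseline balance: {project_balance(BASELINE):+.1f} bn")
print(f"what-if balance:  {project_balance(what_if):+.1f} bn")
```

Change the assumed elasticity and the ranking of scenarios can flip — which is exactly the point made above about assumptions biasing the result set.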
TBG's 'game', as I understand it, is not about producing a perfect, but coarse, real 'what if' modeled budget, but a simulation of the budget decision processes so that citizens can better understand the processes and thus vote in a more informed manner. Lemonade Stand 3.0. You can't be me, I'm taken

For a brief while in a prior incarnation I "managed" (more like conceived of and rode herd over) a project to develop a digital editor for the 3M Digital Audio Mastering System, a 32-track digital audio tape recorder. The overall system design for the system had been developed by the BBC to allow digital transmission of 16-bit, 50 kHz program streams over the air to remote transmitters. It employed a Hamming code to protect against lightning-induced drop-outs. This involved taking two words a chosen interval apart, adding them into a check-sum and recording that check-sum downstream by the same interval. As a consequence one could punch holes in the tape about the size of a paper punch and not lose any information. The problem was that a tape splice edit produced a giant pop. I spotted a TRW 16-bit multiply-accumulator that could operate with a 110-nanosecond period. Back-of-the-envelope calculations showed that one such device was sufficient to perform more than 32 channels' worth of such operations in the 20-microsecond period of one 16-bit sample. "We" got a contract to develop such an editor from 3M and delivered a working system in about one year. I hired one hardware and three software engineers on a part-time consulting basis. The programmers all worked for a local microprocessor-based manufacturer of telephone transmission test equipment. My hardware guy was an MIT EE who worked for aerospace. This was an evenings-and-weekends project and all kept their day jobs. This was, IIRC, 1978, and we paid our consultants $35/hr. It was exhausting for them but they more than doubled their income for that year. Me, not so much. I should have demanded their deal! We were using Motorola 6800 microprocessors and I had partitioned the system into four separate, interacting processor systems. One system consisted of the edit hardware, which executed digital cross-fades using stored PROM coefficient tables, MSI logic and the TRW chip; another was the tape machine controller; another was an SMPTE time code reader and sync machine that could synchronize two machines for assembly edits; and the fourth was the user interface machine. The actual cost of the whole project was around $500,000, including hardware. 3M eventually supplied us with recorders on which to test our system. I don't know what "we" charged 3M. I was "just" the project manager. I understood the application and conceived the overall approach and then got burned for my efforts. If "we" charged 3M $1,000,000 at that time it would not have been too outrageous.

In today's environment there are probably lots of guys and gals who would be quite happy to make $80,000 for a year's work and who have the knowledge but not the day job. Make that $100,000, including paid medical of the caliber of Kaiser, and one would have choices. One or two hardware guys, depending on whether one engineer could adequately handle both the digital hardware and the analog function modules, two or three programmers, and overhead, and we would be under $2,000,000/year for development of the full-blown "professional" system. A secondary focus on a consumer type game could produce revenue within a year or so. I have always liked blue skies.
As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

There may be some mileage in developing an Open Business model at the same time. You'd need some start-up cash - probably $500,000 or so - but you certainly wouldn't need $50m. Second Life charges outrageous land rents - a lot of people are paying >$200/mth for a not very powerful virtual server. I don't think cash is the issue. Nor are processor cycles - clouding should give you all the cycles you want. It might not give them instantly, but some lag wouldn't necessarily be a huge problem.

¹ Unless somebody wants to pay me to do it. This gun for hire. I can be had. :-)

Another reason to copyright and patent as much as possible. Prior to release as open source why not provide fair compensation to the programmers? As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

Can't patent software in the EU. Patents and copyrights are essentially unenforceable against Chinese companies in China. Before you get a patent in the US the USPTO publishes the patent on the internet for "comments and objections." DeepPockets, Inc. regularly scans the site and files objections using their patent staff, and forces the applicant into a complex, expensive legal battle which small companies can't afford. A filed patent that is rejected by the USPTO is automatically in the public domain, meaning DeepPockets, Inc. gets to use it for the cost of keeping their patent staff around, which they have to do anyway. In short: if you have a good idea, Shut Up. Skepticism is the first step on the road to truth. -- Denis Diderot

Um, why not set up one of Chris Cook's LLPs? If the thing makes money you get compensated, if it doesn't nobody loses anything but their time... Chris, can we have the story of that film you made with an LLP, again? Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith

Well, you can find it in here. The "Common Source" approach I advocate is outlined there, and that is to put IP into the hands of a Custodian, and then to encapsulate all the rights of use and usufruct/use value within a Master Partnership agreement or protocol. The outcome is a new form of co-ownership or Common Source property right of indefinite duration. Essentially the entire Creative Commons property relationship is then encapsulated within a Corporate personality. The result is an enterprise model which is both Open and Closed. It is Closed because only LLP members can use it, and Open because anyone who consents to the agreement may be a member. It's worth pointing out that if all the stakeholders are members, then limitation of liability is unnecessary. That is why I wax lyrical about the LLP being the first example of an Open Corporate. (Cost)-Free limitation of liability is IMHO a very dubious add-on in terms of the public interest. "The future is already here -- it's just not very evenly distributed" William Gibson

The easiest way to add complexity to the Fitness Landscape is to build some - say 10 to 20 - analog computers, cross-linkable with A/D and D/A conversion to/from the neural net to simulate continuous data streams which the neural net nodes have to 'chunk' - both for data processing and across time.

Analog computers with >90 dB dynamic range working with 16-bit A/Ds and D/As could give really fine levels of resolution and speed. And your point about non-von Neumann modeling systems is important.
The ability of human controllers to make inputs to either the analog or digital models based on intuition or just trying to crash the system would also be valuable. This might be interesting enough for someone to write a grant for funding the initial phases. What about Bob? :-) As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

The neat thing, for us, about analog computers is they are "noisy." Even the A/D and D/A conversions interject noise. Great! We're dealing with a thundering horde of lousy data and there's little point in pretending any different. Skepticism is the first step on the road to truth. -- Denis Diderot

I know someone doing research into analog computers. They're much, much faster than anything that would be needed here. And - using his approach - very cheap too. For a long time I've toyed with the idea of building an analog computer... Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith

The ability to model a local political-economy has been around for a long time: SimCity does as good a job as one can expect given their limitations and methodology. And since "On January 10, 2008 the SimCity source code was released under the free software GPL 3 license under the name Micropolis," that's where I'd start my research. If I was going to do something ... which I'm not¹.

How clean is the SimCity codebase? Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith

How clean is the SimCity codebase? And how well annotated is it? As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

According to the wiki article the Micropolis codebase is a reworked C++ version of the original C code of the X11 port of SimCity... So, there is hope... Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith

Williams predicted both the stock market slide and the financial debacle some months ahead of the events. You can read Williams' predictions without subscription in the archives on his web-site, www.shadowstats.com. Many of us here at ET and elsewhere recognized the 'signs' ahead of events, as well. I switched my 401k to a foreign-stock fund in time to enjoy a little bounce there, then into a gov't bonds/cash fund in time to save my financial butt - mostly because of discussions here and other similarly oriented blogs. I'm not saying that I don't think that there are Black Swans, but I find the world a whole lot more predictable than not. Marx and Henry George may have had different perspectives, but I can still find salience in both. In any case the economic 'game' can be 'won' in a variety of ways, but I want to found the system on both moral and practical bedrock: Solidarity and Sustainability. Don't we all? I think that, if we start with that, we'll do well enough, whether the result is Chris' system, socialism, entrepreneurial capitalism, or some combination (which is my position). And with respect to this conversation I think that choosing some systemic approach allows us to 'cheat' the myriad curves into a path/math that is linear enough to both create a plan and to measure progress. As to the current economic situations around the world - and the overall interplay of such - it's not that tricky.
If we add the analyses and critiques of Krugman, Williams, Jerome, and most of the rest of us here - which share many elements - we cover the picture well enough. paul spencer

I would be a fan of making computer models/simulations of the economy more generally accessible, if only to better inform political debate. The key thing to be aware of, however, are the assumptions which underlie the model - as these can bias the entire result sets towards some options and against others.

The key to this would be having the code open source AND having an extensively and collaboratively annotated code. The point of the endeavor would be to examine, test and refine the assumptions and how they are implemented in code. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

Something in the LISP family... (before you roll your eyes at this, consider it includes LOGO and R...) Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith

Ah, memories. All your GNUS R belong to us. Now where are we going and what's with the handbasket?

I have no doubt that the problems are non-trivial. An earlier vision of the same type of approach was Kenneth Arrow's vision of modeling societies, sometimes referred to as "Peasants under Glass" back in the '90s. Were it easy it would have been done by now. I don't think that a complete solution is necessary. The one thing I remember from college is that complex equations are most easily solved at boundaries. In the case of Rayleigh's wave equation for acoustics, which involves the interrelations of pressure variations and the variations of air movement or "volume velocity," this is typically done at a rigid boundary or wall at which volume velocity is zero. An amazing number of practical measurements and useful techniques come out of this simplifying assumption. A similar approach might prove fruitful in economic modeling. What I had in mind was a system of interacting agents in which the A.I. agents automated certain aspects of their response to other agents but in which the controller for each agent could intervene on whatever basis seemed appropriate to that controller. The goal would be to define sub-sets of assumptions that would tend towards stability under a wide range of perturbations, including the desire of individual agent controllers, or players, to win. In this fashion it may be possible to empirically move from very crude and simple systems to increasingly complex systems while maintaining stability. In such a game I would at best be a mediocre player. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

What I had in mind was a system of interacting agents in which the A.I. agents automated certain aspects of their response to other agents but in which the controller for each agent could intervene on whatever basis seemed appropriate to that controller. The goal would be to define sub-sets of assumptions that would tend towards stability under a wide range of perturbations, including the desire of individual agent controllers, or players, to win.

Well I want a sound reproduction system with 100% accuracy. So ... GET TO WORK, dammit! (LOL) Your requirements are easy to 'spec' but - alas - a wee tad harder to accomplish. ;-) Skepticism is the first step on the road to truth. -- Denis Diderot

Well I want a sound reproduction system with 100% accuracy. So ...
GET TO WORK, dammit! Well, at least the initial phases of the "research", if carried out with a number of subjects, would be enjoyable, but the long-term expenses would well exceed my budget, and my wife would object, to say the least. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

And of course for the audio reproduction system, in analog systems at least, been there, done that to a significant degree. As the Dutch said while fighting the Spanish: "It is not necessary to have hope in order to persevere."

Okay, guys, I'm hijacking this subthread from this point downwards and sending it here. Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith

It might be important to brand, patent and copyright such software and commercialize it so as to make it more difficult to suppress or control by stasis-seeking forces, or perhaps I am overly paranoid.

You make the source code open source so the codebase cannot be hijacked. But what you want to prevent is for someone to use the code to set up their own alternative game and, through more successful branding, drown out yours and subvert the political purpose of creating the game in the first place. Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith
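A minimal sketch of the agent-based setup described a few comments up — automated agent rules, a hook for a human controller to override them, and a perturbation to probe stability. The trading rule, shock size and price-adjustment constant are all invented placeholders; this shows the shape of such a simulation, not a proposal for the real thing.

```python
import random

class Agent:
    """A trader with an automated rule; a human controller may override its order."""
    def __init__(self, cash=100.0, goods=10.0):
        self.cash, self.goods = cash, goods

    def order(self, price, controller=None):
        # Automated rule: buy when the good looks cheap, sell when it looks dear.
        desired = 1.0 if price < 10 else -1.0
        return controller(self, price) if controller else desired

def simulate(n_agents=100, steps=200, shock_at=100, shock=5.0, seed=1):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    price, history = 10.0, []
    for t in range(steps):
        demand = sum(a.order(price) for a in agents) + random.gauss(0, 2)
        if t == shock_at:
            demand += shock * n_agents           # a perturbation to test stability
        price = max(0.1, price + 0.01 * demand)  # crude price adjustment
        history.append(price)
    return history

prices = simulate()
print(f"price after the shock settles near {prices[-1]:.2f}")
```

Running it with different agent rules or shock sizes is the "empirically move from crude to complex while maintaining stability" exercise in miniature.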
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3570403754711151, "perplexity": 2001.8827533011163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119649048.35/warc/CC-MAIN-20141024030049-00322-ip-10-16-133-185.ec2.internal.warc.gz"}
https://gitlab.mpi-sws.org/iris/iris/-/commit/a1579b6efda2d6f98d55f93374ab0869cfa0b653
Commit a1579b6e by Ralf Jung

### Be explicit about the CMRA on option

parent dccb4153
Pipeline #2859 passed with stage in 9 minutes and 22 seconds

\section{COFE constructions}

\subsection{Trivial pointwise lifting}
The COFE structure on many types can be easily obtained by pointwise lifting of the structure of the components. This is what we do for option $\maybe\cofe$, product $(M_i)_{i \in I}$ (with $I$ some finite index set), sum $\cofe + \cofe'$ and finite partial functions $K \fpfn \monoid$ (with $K$ infinite countable).

\subsection{Next (type-level later)}
Given a COFE $\cofe$, we define $\latert\cofe$ as follows (using a datatype-like notation to define the type):
... ...

@@ -75,6 +80,16 @@
The composition and core for $\cinr$ are defined symmetrically. The remaining cases of the composition and core are all $\bot$. Above, $\mval'$ refers to the validity of $\monoid_1$, and $\mval''$ to the validity of $\monoid_2$.
The step-indexed equivalence is inductively defined as follows:
\begin{mathpar}
\infer{x \nequiv{n} y}{\cinl(x) \nequiv{n} \cinl(y)}
\infer{x \nequiv{n} y}{\cinr(x) \nequiv{n} \cinr(y)}
\axiom{\bot \nequiv{n} \bot}
\end{mathpar}
We obtain the following frame-preserving updates, as well as their symmetric counterparts:
\begin{mathpar}
\inferH{sum-update}
... ...

@@ -87,6 +102,16 @@ We obtain the following frame-preserving updates, as well as their symmetric cou
\end{mathpar}
Crucially, the second rule allows us to \emph{swap} the ``side'' of the sum that the CMRA is on if $\mval$ has \emph{no possible frame}.

\subsection{Option}
The definition of the (CM)RA axioms already lifted the composition operation on $\monoid$ to one on $\maybe\monoid$. We can easily extend this to a full CMRA by defining a suitable core, namely
\begin{align*}
\mcore{\mnocore} \eqdef{}& \mnocore & \\
\mcore{\maybe\melt} \eqdef{}& \mcore\melt & \text{If $\maybe\melt \neq \mnocore$}
\end{align*}
Notice that this core is total, as the result always lies in $\maybe\monoid$ (rather than in $\maybe{\maybe\monoid}$).

\subsection{Finite partial function}
\label{sec:fpfnm}
... ...
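An aside, not part of the commit: for readers of the diff, the lifted composition on $\maybe\monoid$ that the new core complements is the one with $\mnocore$ as unit. The sketch below spells it out in the style of the snippet above; treating $\mtimes$ and $\meltB$ as the document's notation for composition and for a second element is an assumption.

```latex
% Sketch only (not part of the commit): composition on the option CMRA,
% with \mnocore as the unit.
\begin{align*}
\mnocore \mtimes \maybe\melt \eqdef{}& \maybe\melt & \\
\maybe\melt \mtimes \mnocore \eqdef{}& \maybe\melt & \\
\maybe\melt \mtimes \maybe\meltB \eqdef{}& \melt \mtimes \meltB & \text{If $\maybe\melt, \maybe\meltB \neq \mnocore$}
\end{align*}
```

With $\mnocore$ as the unit, checking that the core given in the diff satisfies the core axioms is routine.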
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975844025611877, "perplexity": 3438.5372237900933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00140.warc.gz"}
https://www2.isye.gatech.edu/~dai/cap/gt-seminars/fallseminartop.html
# Probability Seminar ## Topics October 17, 1996 Y. L. Tong Georgia Tech #### Dimension-Reduction Inequalities for Exchangeable Random Variables, With Applications in Statistical Inference Exchangeable random variables make frequent appearances in probability and statistics, and play a central role in Bayes theory, multiple comparisons, reliability theory, and certain other applications. This talk is concerned with a class of dimension-reduction inequalities for exchangeable random variables with selected applications to statistical inference problems. The proof of the main theorem depends on a moment inequality and de Finetti's theorem, which states that exchangeable random variables are conditionally i.i.d. random variables. October 24, 1996 Doug Down Georgia Tech #### Stability and Monotone Properties of a Tandem Queueing Network under Window Flow Control In this talk a network under window flow control is studied. The system is modelled as a (tandem) queueing network with two types of sources, one uncontrolled exogenous traffic and the other controlled. Window flow control operates on the following principle: the controlled source cannot send more than K packets without receiving an acknowledgement from the destination. The situation of interest in this work is that of the flow control being active, in which case the system may be modelled as a network in which exogenous traffic traverses the system as before but the controlled source can be replaced by a closed loop of K packets. Service at each of the servers is assumed to be FIFO. The stability for the system is examined with an emphasis on the situation in which the network dynamics may be described by a Markov process. It is found that the system is stable under the usual load condition (service rate greater than arrival rate) on the exogenous traffic, and in particular is independent of the window size K. Monotonicity properties of certain quantities in the system are identified, which may have implications for further analysis. Finally, the case in which the arrival and service processes are simply assumed to be stationary will be examined. October 31, 1996 Tom Kurtz University of Wisconsin #### Martingale problems for partially observed Markov processes We consider a Markov process $X$ characterized as a solution of a martingale problem with generator $A$. Let $Y(t)=\gamma (X(t))$. Assuming that we observe $Y$ but not $X$, then the fundamental problem of filtering is to characterize the conditional distribution $\pi_t(\Gamma )=P(X(t)\in\Gamma |{\cal F}^Y_t)$. Under very general conditions, the probability measure-valued process $\pi$ can be characterized as a solution of a martingale problem. Applications of the general result include a proof of uniqueness for the Kushner-Stratonovich equation for the conditional distribution of a signal observed in additive white noise, proofs of Burke's output theorem and an analogous theorem of Harrison and Williams for reflecting Brownian motion, conditions under which $Y$ is Markov, and proofs of uniqueness for measure-valued diffusions. November 1, 1996 Reuven Y. Rubinstein Technion, Israel #### Optimization of Computer Simulation Models with Rare Events Discrete event simulation systems (DESS) are widely used in many diverse areas such as computer-communication networks, flexible manufacturing systems, project evaluation and review techniques (PERT), and flow networks. Because of their complexity, such systems are typically analyzed via Monte Carlo simulation methods. 
This talk deals with optimization of complex computer simulation models involving rare events. A classic example is to find an optimal (s,S) policy in a multi-item, multicommodity inventory system, when quality standards require the backlog probability to be extremely small. Our approach is based on change of the probability measure techniques, also called likelihood ratio (LR) and importance sampling (IS) methods. Unfortunately, for arbitrary probability measures the LR estimators and the resulting optimal solution often tend to be unstable and may have large variances. Therefore, choice of the corresponding importance sampling distribution -- and in particular of its parameters -- in an optimal way is an important task. We consider the case where the IS distribution belongs to the same parametric family as the original (true) one and use the stochastic counterpart method to handle simulation based optimization models. More specifically, we use a two-stage procedure: at the first stage we identify (estimate) the optimal parameter vector of the IS distribution, and at the second the optimal solution of the underlying constrained optimization problem. Particular emphasis will be placed on estimation of rare events and on integration of the associated performance function into stochastic optimization programs. Supporting numerical results are provided as well. November 7, 1996 Walter Philipp University of Illinois Urbana-Champaign #### Weak Dependence in Probability, Analysis, and Number Theory In this talk we survey some of the basic facts on weak dependence, some results on sums of lacunary trigonometric series and their application to harmonic analysis and probabilistic number theory. Also, we will mention some new results on the domain of partial attraction of phi-mixing random variables. The talk will be accessible to non-experts and graduate students. November 14, 1996 Raid Amin University of West Florida #### Some Control Charts Based on the Extremes Howell (1949) introduced a Shewhart-type control chart for the smallest and largest observations. He showed that the proposed chart was useful for monitoring the process mean and process variability, and it allowed specification limits to be placed on the chart. We propose an exponentially weighted moving average (EWMA) control chart which is based on smoothing the smallest and largest observations in each sample. A two-dimensional Markov chain to approximate the Average Run Length is developed. A design procedure for the MaxMin EWMA control chart is given. The proposed MaxMin EWMA chart shows which parameters have changed, and in which direction the change occurred. The MaxMin EWMA can also be viewed as smoothed distribution-free tolerance limits. It is a control procedure that offers excellent graphical guidance for monitoring processes. A modified (two-sided) MaxMin chart is also discussed. Numerical results show that the MaxMin EWMA has very good ARL properties for changes in the mean and/or variability. The MaxMin chart has already been implemented at a local company with success. 
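The MaxMin EWMA chart in the last abstract is easy to prototype: smooth each sample's minimum and maximum with the usual EWMA recursion and plot the smoothed extremes against limits placed on the chart. The sketch below only illustrates that idea — the smoothing constant, the specification limits and the data are arbitrary placeholders, and it is not Amin's actual chart design or ARL calculation.

```python
import numpy as np

def maxmin_ewma(samples, lam=0.2, lsl=7.0, usl=13.0):
    """EWMA-smooth each sample's minimum and maximum and compare the smoothed
    extremes against lower/upper specification limits placed on the chart.
    Illustrative only: lam and the limits are arbitrary placeholders."""
    z_min = z_max = float(np.mean(samples[0]))     # start both EWMAs at the first sample mean
    path = []
    for s in samples:
        z_min = lam * min(s) + (1 - lam) * z_min   # EWMA of sample minima
        z_max = lam * max(s) + (1 - lam) * z_max   # EWMA of sample maxima
        path.append((z_min, z_max, z_min < lsl or z_max > usl))
    return path

rng = np.random.default_rng(0)
data = list(rng.normal(10, 1, size=(30, 5)))
data[20:] = [s + 2.5 for s in data[20:]]           # simulated upward shift in the mean
alarms = [i for i, (_, _, a) in enumerate(maxmin_ewma(data)) if a]
print("first out-of-spec signal at sample", alarms[0] if alarms else None)
```

Because the extremes are smoothed separately, the chart also shows which tail moved — the kind of graphical guidance the abstract refers to.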
November 21, 1996 Serguei Foss Novosibirsk State University and Colorado State University #### Coupling and Renovation In the first part of the talk, we introduce notions of coupling (forward coupling) and strong coupling (backward coupling), and show the use of these notions in the stability study of Markov chains and of stochastically recursive sequences, and, in particular, in a simulation of the stationary distribution of a homogeneous discrete-time Markov Chain. In the second part of the talk, we consider the following problem. Let $Y \equiv Y_0 \equiv \{ X_n, n \geq 0 \}$ be a sequence of random variables. For $k=1,2, \ldots$, put $Y_k = \{ X_{k+n}, n \geq 0\}$ and denote by $P_k$ the distribution of $Y_k$. When does there exist a probability measure $P$ such that $P_k \to P$ in the total variation norm? December 5, 1996 Minping Qian Peking University, Beijing, China #### An accelerated algorithm of Gibbs sampling To overcome the difficulty of oscillation when the density of samples is peaky, a reversible calculation scheme is introduced. Theoretical discussion and calculation examples show that it does accelerate the calculations. January 10, 1997 Andre Dabrowski University of Ottawa #### Statistical Analysis of Ion Channel Data Ion channels are small pores present in the outer membranes of most biological cells. Their use by those cells in generating and transmitting electrical signals has made their study of considerable importance in biology and medicine. A considerable mathematical literature has developed on the analysis of the alternating on/off current signal generated by ion channels in "patch-clamp" experiments. After a brief decription of patch-clamp experiments and their associated data, we will provide an overview of the major approaches to the statistical analysis of current traces. The renewal-theoretic approach of Dabrowski, McDonald and Rosler (1990) will be described in greater detail, and applied to the analysis of data arising from an experiment on stretch-sensitive ion channels. January 16, 1997 Christian Houdre Georgia Tech #### An Interpolation Formula and Its Consequences We present an interpolation formula for the expectation of functions of Gaussian random vectors. This is then applied to present new correlation inequalities, comparison theorems and tail inequalities for various classes of functions of Gaussian vectors. This approach can be extended to the infinitely divisible and the discrete cube cases. January 30, 1997 Ming Liao Auburn University #### L'evy processes on Lie groups and stability of stochastic flows We consider stochastic flows generated by stochastic differential equations on compact manifolds which are contained in finite dimensional Lie transformation groups. Using a result for limiting behavior of L'evy processes on Lie groups, we can decompose such a stochastic flow as a product of the following three transformations: (1) a random "rotation" which tends to a limit as time goes to infinity; (2) an asymptotically deterministic flow; (3) another random "rotation". Using this decomposition, we may describe the random "sinks" and "sources" of the stochastic flow explicitly. Examples of stochastic flows on spheres will be discussed. February 6, 1997 Dana Randall Georgia Tech #### Testable algorithms for generating self-avoiding walks We present a polynomial time Monte Carlo algorithm for almost uniformly generating and approximately counting self-avoiding walks in rectangular lattices. 
These are classical problems that arise, for example, in the study of long polymer chains. While there are a number of Monte Carlo algorithms used to solve these problems in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, our algorithm relies on a single, widely-believed conjecture that is simpler than preceding assumptions, and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. (Joint work with Alistair Sinclair.) February 13, 1997 Indiana University #### Models combining group symmetry and conditional independence in a multivariate normal distribution Three of the most important concepts used in defining a statistical model are independence, conditional distributions, and symmetries. Statistical models given by a combination of two of these concepts, conditional distributions and independence, the so-called conditional independence models, have received increasing attention in recent years. The models are defined in terms of directed graphs, undirected graphs or a combination of the two, the so-called chain graphs. This paper combines conditional independence (CI) restrictions with group symmetry (GS) restrictions to obtain the group symmetry conditional independence (GS-CI) models. The group symmetry models and the conditional independence models are thus special cases of the GS-CI models. A complete solution to the likelihood inference for the GS-CI models is presented. Special examples of GS models are Complete Symmetry, Compound Symmetry, Circular Symmetry, Complex Normal Distributions, Multivariate Complete Symmetry, Multivariate Compound Symmetry, and Multivariate Circular Symmetry. When some of these simple GS models are combined with some of the simple CI models, numerous well-behaved GS-CI models can be presented. February 27, 1997 Dimitris Bertsimas Sloan School, MIT #### Optimization of multiclass queueing networks via infinite linear programming and singular perturbation methods We propose methods for optimization of multiclass queueing networks that model manufacturing systems. We combine ideas from optimization and partial differential equations. The first approach aims to explore the dynamic character of the problem by considering the fluid model of the queueing network. We propose an algorithm that solves the fluid control problem based on infinite linear programming. Our algorithm is based on nonlinear optimization ideas, and solves large scale problems (50 station problems with several hundred classes) very efficiently. The second approach aims to shed light on the question of how stochasticity affects the character of optimal policies. We use singular perturbation techniques from the theory of partial differential equations to obtain a series of optimization problems, the first of which is the fluid optimal control problem mentioned in the previous paragraph. The second order problem provides a correction to the optimal fluid solution. This second order problem has strong ties with the optimal control of Brownian multiclass stochastic networks. We solve the problem explicitly in many examples and we see that the singular perturbation approach leads to insightful new qualitative behavior. In particular, we obtain explicit results on how variability in the system affects the character of the optimal policy. 
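For contrast with the Randall abstract above: the naive Monte Carlo baseline for counting self-avoiding walks can be written in a few lines — grow a uniform random walk, abort on self-intersection, and scale the survival rate by 4^n. This is the textbook simple-sampling estimator (which degrades exponentially in n), not the algorithm of that talk.

```python
import random

def sample_walk(n):
    """Simple sampling: a plain nearest-neighbour walk on Z^2 of n steps,
    abandoned as soon as it revisits a site."""
    x = y = 0
    visited = {(0, 0)}
    for _ in range(n):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        if (x, y) in visited:
            return False
        visited.add((x, y))
    return True

def estimate_saw_count(n, trials=200_000, seed=42):
    """Estimate c_n, the number of n-step self-avoiding walks on Z^2, via
    c_n = 4^n * P(a uniform random walk of n steps is self-avoiding)."""
    random.seed(seed)
    hits = sum(sample_walk(n) for _ in range(trials))
    return 4 ** n * hits / trials

print(estimate_saw_count(10))   # the exact value of c_10 is 44100
```

Pushing n much higher is exactly where simple sampling breaks down and cleverer, provably reliable samplers of the kind described in the abstract are needed.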
March 5, 1997 Paul Glasserman #### Importance Sampling for Rare Event Simulation: Good News and Bad News Precise estimation of rare event probabilities by simulation can be difficult: the computational burden frequently grows exponentially in the rarity of the event. Importance sampling --- based on applying a change of measure to make a rare event less rare --- can improve efficiency by orders of magnitude. But finding the right change of measure can be difficult. Through a variety of examples in queueing and other contexts, a general strategy has emerged: find the most likely path to a rare event and apply a change of measure to follow this path. The most likely path is found through large deviations calculations. The first part of this talk reviews positive results that support this strategy and examples of its potential for dramatic variance reduction. The second part shows, however, that the same approach can be disastrous even in very simple examples. For each negative example, we propose a simple modification that produces an asymptotically optimal estimator. March 13, 1997 Hayriye Ayhan Georgia Tech #### On the Time-Dependent Occupancy and Backlog Distributions for the $GI / G / \infty$ Queue An examination of sample path dynamics allows a straightforward development of integral equations having solutions that give time-dependent occupancy and backlog distributions (conditioned on the time of the first arrival) for the $GI/G/\infty$ queue. These integral equations are amenable to numerical evaluation and can be generalized to characterize the $GI^X/G/\infty$ queue. Two examples are given that illustrate the results. April 17, 1997 Andrew Nobel University of North Carolina, Chapel Hill #### Adaptive Model Selection Using Empirical Complexities We propose and analyze an adaptive model selection procedure for multivariate classification, which is based on complexity penalized empirical risk. The procedure divides the available data into two parts. The first is used to select an empirical cover of each model class. The second is used to select from each cover a candidate rule with the smallest number of misclassifications. The final estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical probability of error. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. April 24, 1997 University of North Carolina, Chapel Hill and Technion -- Israel Institute of Technology #### An Introduction to Superprocesses A superprocess is a measure valued stochastic process used for modelling, among other things, infinite density systems of particles undergoing random motion and random branching. They can be studied either via the general theory of Markov processes, stochastic partial differential equations, or martingale problems. In this talk I shall try to provide an introduction to superprocesses for the uninitiated, describing their basic structure, some basic results, and some interesting open questions. The talk will be followed by a 15 minute movie for anyone who wishes to stay. May 1, 1997 Alex Koldobsky University of Texas at San Antonio #### More on Schoenberg's problem on positive definite functions In 1938, Schoenberg posed the following problem: for which $p>0$ is the function $\exp(-\|x\|_q^p)$ positive definite. The solution was completed in 1991, and since then there have appeared a few more proofs all of which were quite technical.
We present a new proof which significantly generalizes the solution and, in a certain sense, clears things up. This proof is based on extending the Lévy representation of norms to the case of negative exponents. We also show the connections between Schoenberg's problem and isotropic random vectors, and apply our results to inequalities of correlation type for stable vectors. May 8, 1997 Bok Sik Yoon Hong-Ik University, Seoul, Korea & Georgia Tech #### QN-GPH Method for Sojourn Time Distributions in General Queueing Networks We introduce the QN-GPH method to compute the sojourn time distributions in non-product form open queueing networks. QN-GPH is based on GPH semi-Markov chain modelling for the location process of a typical customer. To derive the method, GPH distribution, GPH/GPH/1 queue, and GPH semi-Markov chains are briefly explained and a seemingly efficient method for computing the transition function and first passage time distributions in the GPH semi-Markov chain is developed. Numerical examples in the area of telecommunication are given to demonstrate the accuracy of the method. The QN-GPH method seems to be a computationally affordable tool for delay analysis in various manufacturing systems or computer and communication systems. May 15, 1997 Jan Rosinski University of Tennessee, Knoxville #### Problems of unitary representations arising in the study of stable processes Study of different classes of stable processes, such as stationary, self-similar, stationary increment, isotropic, etc., leads to the problem of obtaining explicit forms for unitary representations on L^p spaces, which can be used for a classification of stable processes. This approach is necessitated by the lack of a satisfactory spectral theorem when p < 2. The talk will survey some results in this area and present some open problems. May 22, 1997 Robert Cooper Florida Atlantic University #### Polling Models and Vacation Models in Queueing Theory: Some Interesting and Surprising Stuff A polling model is used to represent a system of multiple queues that are attended by a single server that switches from queue to queue in some prescribed manner. These models have many important applications, such as performance analysis of computer-communication networks and manufacturing systems, and they tend to be quite complicated. A vacation model describes a single-server queue in which the server can be unavailable for work (away on "vacation") even though customers are waiting. Some vacation models exhibit a "decomposition," in which the effects of the vacations can be separated from the effects of the stochastic variability of the arrival times and the service times. In a polling model, the time that the server spends away from any particular queue, serving the other queues or switching among them, can be viewed as a vacation from that queue. Adoption of this viewpoint greatly simplifies the analysis of polling models. Recently, it has been found that polling models themselves enjoy an analogous decomposition with respect to the server switchover times (or, in the manufacturing context, setup times), but for apparently different reasons. Furthermore, it has recently been discovered that some polling models exhibit counterintuitive behavior: when switchover times increase, waiting times decrease; or, equivalently, in the parlance of manufacturing, WIP (work in process) can be decreased by artificially increasing the setup times.
In this talk we give an overview of polling and vacation models, including some historical context. Also, using decomposition we "explain" the counterintuitive behavior, and identify it as a hidden example of the well-known renewal (length-biasing) paradox. The talk will emphasize conceptual arguments rather than mathematical detail, and should be of interest to a general audience. Unless otherwise noted, the seminar meets Thursdays at 3 PM in Skiles, Room 140. For further information, contact Jim Dai ([email protected]) or Richard Serfozo ([email protected]). May 29, 1997 Takis Konstantopoulos University of Texas, Austin #### Distributional Approximations of Processes, Queues and Networks under Long-Range Dependence Assumptions In this talk we discuss the issue of modeling and analysis of stochastic systems under the assumption that the inputs possess long-range dependence. The hypothesis is based on experimental observations in high-speed communication networks that have motivated a large body of research in recent years. After briefly reviewing typical experiments, models, and theoretical results on performance, we present a detailed limit theorem for a class of traffic processes possessing a weak regenerative structure with infinite variance cycle times and "burstiness constrained" cycles. The distribution of the approximating process is characterized and found to be of Lévy-type with a stable marginal distribution whose index is the ratio of a parameter characterizing the tail of cycle times and a parameter representing the asymptotic growth rate of traffic processes. We also discuss queueing analysis for Lévy networks. Finally, we comment on the matching of distributions of both arrivals and queues with those observed in practice. June 5, 1997 Alan F. Karr National Institute of Statistical Sciences #### Does Code Decay? Developers of large software systems widely believe that these systems _decay_ over time, becoming increasingly hard to change: changes take longer, cost more and are more likely to induce faults. This talk will describe a large, cross-disciplinary, multi-organization study, now in its first year, meant to define, measure and visualize code decay, to identify its causes (both structural and organizational), to quantify effects, and to devise remedies. Emphasis will be on the code itself and its change history as statistical data, and on tools to describe and visualize changes. Last updated: May 31, 1997 by J. Hasenbein ([email protected])
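To make the change-of-measure idea in the Rubinstein and Glasserman abstracts above concrete, here is the standard exponential-tilting toy example for a Gaussian tail probability. The threshold and sample sizes are arbitrary, and the tilt simply shifts the sampling mean onto the rare set — it illustrates the general recipe, not the specific queueing examples of those talks.

```python
import numpy as np

def naive(a, n, rng):
    """Crude Monte Carlo estimate of P(X > a) for X ~ N(0,1)."""
    return (rng.standard_normal(n) > a).mean()

def tilted(a, n, rng):
    """Importance sampling: draw from N(a,1) (the rare event is no longer rare)
    and reweight each sample by the likelihood ratio dN(0,1)/dN(a,1)."""
    y = rng.standard_normal(n) + a              # samples from the tilted law N(a,1)
    lr = np.exp(-a * y + 0.5 * a ** 2)          # likelihood ratio at each sample
    return np.mean((y > a) * lr)

rng = np.random.default_rng(0)
a = 5.0                                         # P(X > 5) is about 2.87e-7
print("naive  :", naive(a, 100_000, rng))       # almost surely 0.0
print("tilted :", tilted(a, 100_000, rng))      # close to 2.87e-7
```

The naive estimator almost never sees the event at this sample size, while the tilted estimator returns a stable answer near 2.9e-7; the same strategy is what can backfire, as the Glasserman abstract warns, when the change of measure is chosen badly.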
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8042525053024292, "perplexity": 783.5280054246175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811795.10/warc/CC-MAIN-20180218081112-20180218101112-00273.warc.gz"}
https://cob.silverchair.com/jeb/article/208/6/1109/9433/Temperature-alters-the-respiratory-surface-area-of
We have previously found that the gills of crucian carp Carassius carassius living in normoxic (aerated) water lack protruding lamellae, the primary site of O2 uptake in fish, and that exposing them to hypoxia increases the respiratory surface area of the gills ∼7.5-fold. We here examine whether this morphological change is triggered by temperature. We acclimated crucian carp to 10, 15, 20 and 25°C for 1 month, and investigated gill morphology, oxygen consumption and the critical oxygen concentration at the different temperatures. As expected, oxygen consumption increased with temperature. Also, at 25°C an increase in the respiratory surface area, similar to that seen in hypoxia, occurred. This coincided with a reduced critical oxygen concentration. We also found that the rate of this transformation increased with rising temperature. Goldfish Carassius auratus, a close relative of crucian carp, previously kept at 25°C, were exposed to 15°C and 7.5°C. At 7.5°C the respiratory surface area of its gills was reduced by development of an interlamellar cell mass as found in normoxic crucian carp kept at 10-20°C. Thus, both species alter the respiratory surface area in response to temperature. Rather than being a graded change, the results suggest that the alteration of gill morphology is triggered at a given temperature. Oxygen-binding data reveal very high oxygen affinities of crucian carp haemoglobins, particularly at high pH and low temperature, which may be prerequisites for the reduced gill respiratory surface area at low temperatures. As ambient oxygen and temperature can both induce the remodelling of the gills, the response appears primarily to be an adaptation to the oxygen demand of the fish.

Crucian carp and goldfish, two closely related species of the same genus Carassius, exhibit a striking capacity for coping with low levels of oxygen and a wide range of ambient temperatures. Both species are anoxia tolerant and able to convert lactate to ethanol during severe hypoxia and anoxia, thus avoiding acidosis (Johnston and Bernard, 1983; Shoubridge and Hochachka, 1980; Shoubridge and Hochachka, 1983). Although this mechanism enables them to avoid lactate self-pollution during anoxia, the release of ethanol to the water is energetically very costly, due to the loss of this energy-rich carbon compound. Since their anoxic survival time is dependent on their glycogen stores (Nilsson, 1990), it must be advantageous to be able to postpone the activation of anaerobic ethanol production, and rely on their aerobic metabolism, for as long as possible.

Being freshwater fish, crucian carp and goldfish are faced with a dilemma: they have to cope with a continuous ion loss and water influx over the respiratory surface area in the gills (Evans, 1979), but still maintain sufficient oxygen uptake. The water influx must be compensated by a large urine production, resulting in an even greater loss of ions. These ion losses must be compensated by energetically demanding ion transport over the gills. Thus, being able to modulate the respiratory surface area in response to oxygen supply and demand should be of advantage. We have previously shown that crucian carp kept in normoxia at 8°C lack protruding lamellae, but if exposed to hypoxia, a morphological alteration is triggered resulting in protruding lamellae and a 7.5-fold increase of respiratory surface area (Sollid et al., 2003). This caused a fall in the critical oxygen concentration ([O2]crit), i.e.
the lowest ambient [O2] where the fish is able to sustain its resting oxygen consumption (ṀO2). The gill remodelling is due to an induction of apoptosis and cell-cycle arrest in the mass of cells filling up the space between adjacent lamellae, causing this interlamellar cell mass (ILCM) to shrink. A reduction in respiratory surface area in normoxia should lead to lower water and ion fluxes and thus a reduction of osmoregulatory costs. At the same time, the crucian carp's ability to maintain a sufficient rate of oxygen uptake without protruding lamellae indicates a very high oxygen affinity of its haemoglobin (Hb), which has remained to be studied.

Fish are ectothermic organisms; hence increased temperature profoundly raises their metabolic rates. Increased temperature also decreases the amount of oxygen dissolved in the water. Temperature-related changes in metabolism are met with behavioural, respiratory, cardiovascular, hematological and biochemical adjustments (Aguiar et al., 2002; Burggren, 1982; Butler and Taylor, 1975; Caldwell, 1969; Fernandes and Rantin, 1989; Goldspink, 1995; Houston et al., 1996; Houston and Rupert, 1976; Maricondi-Massari et al., 1998). The responses to increased temperature may include air gulping, increased gill ventilation, increased lamellar perfusion, increased cardiac output, changes in Hb function and altered expression of metabolic enzymes. Studies related to gill morphology and temperature are scarce and only cover acute temperature changes (Hocutt and Tilney, 1985; Jacobs et al., 1981; Nolan et al., 2000; Tilney and Hocutt, 1987), which often reflect more pathophysiological responses that are not necessarily adaptive. Changes in Hb function could result from changes in the levels of erythrocytic effectors such as organic phosphates (ATP, often supplemented by guanosine triphosphates in fish) or changes in Hb isomorphs (Weber, 2000). In addition to the 'standard' electrophoretically 'anodic' Hb components that display pronounced Bohr shifts, some fishes (salmonids, catfishes and eels) also have electrophoretically 'cathodic' Hbs, which have lower Bohr shifts and show divergent phosphate sensitivities (which are insignificant in salmonids and large in eels and catfishes). The Hb composition of goldfish (which is closely related to crucian carp) changes with temperature: electrophoresis reveals two isoHbs in fish acclimated to 2°C, and three isoHbs in fish acclimated to 20°C and 35°C (Houston and Cyr, 1974). This modification also occurs in isolated cells and in hemolysates, suggesting that it is caused by altered aggregation of pre-existing subunits rather than de novo Hb synthesis (Houston and Rupert, 1976).

The aim of this study was to investigate if increased temperature, leading to an increased oxygen demand, can trigger the morphological response recently found in hypoxia-exposed crucian carp (Sollid et al., 2003). At our latitude the typical seasonal temperature range for the crucian carp habitat is 0°C to 25°C. We thus acclimated crucian carp to temperatures ranging from 10 to 25°C to examine the possible effects of changing oxygen demand on gill morphology. In addition, goldfish were acclimated at 7.5, 15 and 25°C to see if the gill remodelling seen in crucian carp is also expressed in this closely related species when kept at low temperatures. Since goldfish are normally kept at room temperature, an ability to remodel the gills may not have been noticed.
To identify adaptations in oxygen transport functions we also investigated Hb multiplicity in fish acclimated to the different temperatures, and measured the intrinsic oxygen-binding properties and effector sensitivities of crucian carp Hbs.

### Animals

Crucian carp Carassius carassius L. (weighing 12.5-31.5 g; all adults) were caught in June 2003 in the Tjernsrud pond, Oslo community. They were kept on a 12 h:12 h L:D regime in tanks (∼100 fish per 500 l) continuously supplied with aerated and dechlorinated Oslo tapwater (10°C), and fed daily with commercial carp food (Tetra Pond, Tetra, Melle, Germany). Goldfish Carassius auratus L. (weighing 8.0-16.5 g; all adults), bred and cultivated in Singapore, were bought from a commercial wholesaler. They were kept in a tank (∼100 fish per 500 l) with aerated, ion strength adjusted to 500 μS cm−1 (dH-Salt, NOR ZOO, Bergen, Norway) and dechlorinated Oslo tap water (25°C) for 1 month before experiments. The light regime and feeding were the same as for crucian carp.

### Temperature acclimation

Crucian carp were transferred to new holding tanks (∼10 fish per 25 l) held at 10°C, 15°C, 20°C and 25°C, respectively, and acclimated 1 month before respirometry experiments (see below). The fish were fed until 24 h before respirometry. Each fish was placed in the respirometer with a continuous flow of aerated water until 12 h before commencing measurements. To examine if gill morphology was affected by the respirometry, four fish from each group were sampled before and after respirometry and the left first and second gill arches were dissected out. As a control for possible effects of the confinement in the respirometer, crucian carp kept at 15°C were placed in the respirometer for 24 h and continuously supplied with aerated and dechlorinated Oslo tapwater. After exposures, the fish were killed with a sharp blow to the head. Goldfish were transferred to a new container (∼10 fish per 25 l) with ion strength adjusted, aerated and dechlorinated Oslo tapwater (25°C) for 1 month, whereafter the gills of four fish were sampled. Subsequently, the water temperature in the container was reduced to 15°C. After 5 days at this temperature four additional fish were sampled. The temperature was finally reduced to 7.5°C and the gills of four fish were sampled after 5 days and 1 month at this temperature. The fish were fed during temperature acclimation. The fish were killed for dissection of the left first and second gill arches and treated as the crucian carp.

### Respirometry

ṀO2 during falling water oxygen concentration was measured with closed respirometry, and the [O2]crit was determined as described previously (Nilsson, 1992). The temperature in the 1 l respirometer was the same as the acclimation temperature. Oxygen levels in the respirometer were measured with an oxygen electrode (Oxi340i, WTW, Weilheim, Germany) and recorded on a laptop computer via an analog-digital converter (Powerlab 4/20, AD Instruments Ltd., Oxon, UK). The fish were removed from the respirometer for dissection of gills when the recorded oxygen content became 0 mg O2 l−1.

### Scanning electron microscopy (SEM)

The gill morphology of all groups was investigated as previously described (Sollid et al., 2003). In brief, gills were fixed in 3% glutaraldehyde in 0.1 mol l−1 sodium cacodylate buffer before being dried, AuPd coated, and examined using a JSM 6400 electron microscope (JEOL, Peabody, USA).
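Returning to the respirometry protocol above: a sketch of how a closed-respirometry record might be reduced to ṀO2 and an [O2]crit estimate. The broken-stick fit below is a generic illustration, not the procedure of Nilsson (1992) that the authors actually followed, and the function names, arguments and thresholds are invented.

```python
import numpy as np

def mo2_from_trace(t_h, o2_mg_l, volume_l, mass_kg):
    """Oxygen consumption rate (mg O2 kg^-1 h^-1) from the declining O2
    concentration recorded in a closed respirometer of known volume."""
    slope = np.gradient(o2_mg_l, t_h)          # d[O2]/dt in mg l^-1 h^-1 (negative)
    return -slope * volume_l / mass_kg

def o2_crit(o2_mg_l, mo2):
    """Crude broken-stick estimate of [O2]crit: above it MO2 is roughly flat,
    below it MO2 falls with [O2]. Returns the breakpoint minimising the summed
    squared residuals of two straight-line fits."""
    order = np.argsort(o2_mg_l)
    x, y = np.asarray(o2_mg_l)[order], np.asarray(mo2)[order]
    best_x, best_sse = None, np.inf
    for i in range(3, len(x) - 3):             # candidate breakpoints
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)
            sse += float(np.sum((ys - np.polyval(coef, xs)) ** 2))
        if sse < best_sse:
            best_x, best_sse = x[i], sse
    return best_x
```

Applied to a trace from one of the 1 l respirometer runs, the flat segment gives the routine ṀO2 and the breakpoint approximates [O2]crit.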
### Hb oxygen binding

IsoHb composition was probed using PhastSystem (Amersham Biosciences, Piscataway, NJ, USA) by isoelectrofocusing on polyacrylamide gels in the 5-8 pH range. The crucian carp had been acclimated for 1 month at 16°C or 26°C prior to blood sampling. Crucian carp Hb for oxygen-binding studies was prepared from washed red cells as previously described (Weber et al., 1987). The Hb was 'stripped' of ionic effectors by column chromatography on Sephadex G25 Fine gel (Berman et al., 1971). Major isoHbs were separated by preparative isoelectric focusing using Pharmacia ampholytes (0.22% pH 5-7, 0.22% pH 6-8 and 0.11% pH 6.7-7.7). Retrieved pools were concentrated using Amicon Ultra-15 (molecular weight cut-off 10,000) filters. All Hb samples were subsequently dialyzed for at least 24 h against three changes of 10 mmol l-1 Hepes buffer containing 0.5 mmol l-1 EDTA. All preparation procedures were carried out at 0-5°C. Samples were frozen at -80°C and freshly thawed for subsequent analyses. O2 equilibrium measurements at different pH values and in the presence of 0.1 mol l-1 KCl were carried out using a modified gas diffusion chamber as previously detailed (Weber, 1981; Wells and Weber, 1989).

### Statistics

All values are given as means ± s.e.m., and statistically significant differences were detected with a one-way ANOVA with Tukey's test as a post-hoc test, using GraphPad InStat (GraphPad, San Diego, CA, USA).

### Morphology

In crucian carp that were originally kept at 10°C, and then exposed to higher temperatures (15, 20 and 25°C), the gill morphology only changed in the 25°C group prior to the respirometry (Fig. 1a-c). Thus, the threefold increase in the rate of oxygen consumption from 10 to 20°C did not trigger a change of gill morphology (Table 1).

Fig. 1. (a-f) Scanning electron micrographs from the second gill arch of crucian carp and goldfish kept at different temperatures. At 15°C (a) and 20°C (b) the crucian carp gills do not have protruding lamellae. However, after respirometry at 20°C (c) the crucian carp gill filament exhibited protruding lamellae, a response probably induced by the hypoxic period in the respirometer. At 25°C (d) the crucian carp developed protruding lamellae in normoxia. Goldfish gills at 15°C (e) showed protruding lamellae; however, after 5 days at 7.5°C (f) the gill morphology of goldfish started to resemble that of normoxic crucian carp at 10-20°C. Scale bar, 50 μm.

Table 1. Respirometry data from the crucian carp

| Group | Temp. (°C) | Rate of O2 consumption (mg kg−1 h−1) | [O2]crit (mg l−1) | O2 consumption:[O2]crit |
|---|---|---|---|---|
| A | 10 | 38.9±4.5 b,c,d | 1.43±0.13 b,c,d | 27.4±2.1 b,c,d |
| B | 15 | 88.2±7.9 a,d | 2.45±0.18 a,c,d | 36.0±2.0 a,d |
| C | 20 | 122.7±12.2 a,d | 3.55±0.29 a,b | 34.5±1.1 a,d |
| D | 25 | 209.5±15.1 a,b,c | 4.02±0.27 a,b | 52.1±1.6 a,b,c |
Significant temperature-dependent changes in the mean values for the rate of oxygen consumption, critical oxygen tension ([O2]crit) and the ratio between the two at the different temperatures are indicated by an ANOVA (P<0.0001). The superscript letters (a,b,c,d) denote significant differences (P<0.05) between groups (A,B,C,D) within a variable. An increase of the ratio of oxygen consumption rate to [O2]crit indicates an improvement of the capacity for oxygen uptake.

Throughout closed respirometry, the oxygen tension in the respirometer drops, eventually to a level below the [O2]crit. Hence the fish will experience a hypoxic environment and finally anoxia (0 mg O2 l-1). At 8°C an increase of respiratory surface in hypoxia takes 3 days before it is pronounced (Sollid et al., 2003). The present results show that this time period is dramatically reduced at higher temperatures. In the respirometer at 15 and 20°C the fish experienced hypoxia and anoxia on average 6 h before being sampled. Crucian carp at 15°C (not shown) and 20°C (Fig. 1d) underwent the characteristic remodelling of their gills to increase the respiratory surface area during these few hours in the respirometer. This change was not due to confinement (not shown). In our previous study, morphometric measurements indicated a ∼7.5-fold increase in the lamellar area exposed to water in crucian carp kept in hypoxia (Sollid et al., 2003). The gill morphological changes of crucian carp kept at 25°C, and of those exposed to hypoxia in the respirometer at 15°C and 20°C in this study, appeared to be identical in extent to those seen after hypoxia in our previous study. However, since the gills were only examined by SEM in the present study, no quantitative morphometrical measurements were attempted.

Goldfish at 20°C had protruding lamellae (not shown), which were indistinguishable from those seen in the 15°C group (Fig. 1f). However, in goldfish exposed to 7.5°C, a clear change in the gill filament morphology occurred. This was clearly visible after 5 days (Fig. 1e) and no further changes were apparent after 1 month (not shown). The space between adjacent lamellae was partially filled with a cell mass, as seen in crucian carp, although slightly less pronounced, as the edges of the lamellae were still visible.

### Respiration

The respirometry data for the crucian carp showed, as expected, that the rate of oxygen consumption increased with temperature (P<0.0001, Fig. 2A). A temperature rise from 10°C to 25°C increased the rate of oxygen consumption more than fivefold, from 38.9±4.5 mg kg-1 h-1 to 209.5±15.1 mg kg-1 h-1 (P<0.001, Table 1). The increase in oxygen consumption from 10°C to 25°C led to an increase of [O2]crit from 1.43±0.13 kPa to 4.02±0.27 kPa (P<0.001, Table 1). However, there was a strikingly small increase in [O2]crit between 20°C and 25°C (Fig. 2B). This corresponds well with the transformation in gill morphology that occurred between these two temperatures (Fig. 1b,d).

Fig. 2. (A-F) Respirometry data from the present study of crucian carp (left), and from previous studies (right) on goldfish (Fry and Hart, 1948) and Atlantic cod (Schurmann and Steffensen, 1997), showing the effect of temperature on the rate of oxygen consumption (A and B), the alteration of critical oxygen concentration ([O2]crit) in response to different temperatures (C and D), and how the different species alter their oxygen uptake capabilities at different rates of oxygen consumption (E and F).
The relationship between the rate of oxygen consumption and [O2]crit in crucian carp (Fig. 2C) was similar to literature data for goldfish (Fig. 2F). Both species show relatively low [O2]crit at high temperatures, which indicates an improvement of their oxygen uptake capabilities that is likely to coincide with the remodelling of the gills. By contrast, the Atlantic cod (Schurmann and Steffensen, 1997) shows a steady increase in [O2]crit with rising oxygen consumption (Fig. 2F), indicating that this species is incapable of any major morphological or physiological adjustments to improve its O2 uptake capacity at high temperatures. Also, crucian carp showed lower [O2]crit values than the goldfish. For example, at an oxygen consumption rate of approximately 85 mg kg-1 h-1, the [O2]crit values were 2.4 kPa and 3.9 kPa for crucian carp and goldfish, respectively. This indicates an ability of crucian carp to extract more oxygen from the surrounding water than goldfish. The Q10 was also similar between the two species (Table 2). However, in contrast to goldfish, crucian carp exhibited a higher Q10 between 20-25°C than between 15-20°C (Table 2).

Table 2. Q10 values for crucian carp, goldfish and Atlantic cod

| Q10 regimes | Crucian carp | Atlantic cod | Goldfish |
|---|---|---|---|
| 5-10°C | – | – | 2.6 |
| 10-15°C | 5.1 | 1.9 | 4.3 |
| 15-20°C | 1.9 | – | 2.9 |
| 20-25°C | 2.9 | – | 2.7 |

Q10 values (i.e. the increase in the rate of oxygen consumption observed at a 10°C higher temperature, here given for 5°C intervals) for crucian carp (present study), goldfish (Fry and Hart, 1948) and Atlantic cod (Schurmann and Steffensen, 1997).
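For reference, the Q10 values in Table 2 follow the standard temperature-coefficient relation; written out, with a worked example using the oxygen consumption rates from Table 1:

$$Q_{10} = \left(\frac{R_2}{R_1}\right)^{10/(T_2 - T_1)}$$

where $R_1$ and $R_2$ are the rates measured at temperatures $T_1$ and $T_2$ (in °C). For crucian carp between 10°C and 15°C, for example, $Q_{10} = (88.2/38.9)^{10/5} \approx 5.1$, in agreement with the value listed in Table 2.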
### Hb and oxygen binding

The thin-layer isoelectrofocusing of Hbs from fish acclimated to 14 or 26°C (Fig. 3) showed at least three major bands. Importantly, no consistent differences were seen in the number or relative intensities of the bands between fish acclimated to the two temperatures.

Fig. 3. Thin-layer isoelectrofocussing gels of Hbs from individual crucian carp specimens acclimated to either 14 or 26°C (as indicated) for 1 month, showing correspondence in the isoHb compositions of the two groups.

As shown (Fig. 4A,B), stripped crucian carp Hbs show an extremely high oxygen affinity (P50=0.8 and 1.8 at pH 7.6 at 10 and 20°C, respectively). The Bohr effect, which approximates -0.7 at pH 7.0, decreases markedly with increasing pH and is virtually absent at pH above 7.7 at 20°C. Interestingly, cooperativity increased with decreasing pH over the entire range investigated (8.4-6.4, Fig. 4A), whereas n50 values at low pH fall to unity and lower (reflecting anticooperativity) in fish Hbs that express Root effects (Brittain, 1987). The oxygen affinities decrease with increasing temperature (in agreement with the exothermic nature of haem oxygenation). As expressed by the heats of oxygenation (ΔH=58 and 49 kJ mol-1 at pH 7.6 and 7.0, respectively), the temperature sensitivity of P50 decreases with pH. This correlates with the parallel increase in the Bohr effect and, thus, in the endothermic dissociation of the Bohr protons. By contrast, the ATP sensitivity of the Hb decreases with increasing pH (Fig. 4A), in accordance with the associated decrease in positive charge of the phosphate-binding sites.

Fig. 4. Oxygen-binding characteristics and isoHb differentiation of crucian carp Hb, measured in the presence of 0.1 mol l-1 KCl and 0.1 mol l-1 Hepes buffers. (A) Oxygen tensions and Hill's cooperativity coefficients at 50% saturation (P50 and n50) of stripped hemolysates and their pH dependence (Bohr plots) at 10°C (□) and 20°C (○), and of the lysate in the presence of a saturating concentration of ATP (ATP/tetrameric Hb ratio, 9.6) (•); [haem], 0.50 mmol l-1. (B) Oxygen equilibrium curves at 10°C, 20°C, and 20°C in the presence of saturating ATP (interpolated from data in A). (C) Isoelectric focusing profile, showing absorptions at 540 nm (○) and pH values at 25°C (▵) of eluted fractions, and the presence of three major (II, III and IV) and two minor (I and V) isoHbs. (D) Bohr plots of isoHbs I-IV at 10 and 20°C.

Crucian carp red cells contain at least three major isoHbs (II, III and IV) and two minor ones (I and V). The elution profile (Fig. 4C) indicates relative abundances of 6% HbI, 27% HbII, 62% HbIII+IV and 5% HbV, and that Hbs I, II, III and IV are isoelectric at pH values of 6.7, 6.4, 6.9 and 5.8, respectively. All components exhibit similar, high oxygen affinities and similar Bohr effects (P50 of 1.5-1.8 mmHg at pH 7.6 and 20°C, and ν≅-0.30). These properties correspond with those of the stripped hemolysates, indicating the absence of functionally significant interaction between the isolated components.

The results show that both crucian carp and goldfish have the capacity to remodel their gills in response to temperature, hence altering the respiratory surface area. That hypoxia and high temperature induce apparently identical changes, i.e. causing the gill lamellae to protrude, suggests that the actual trigger is the oxygen demand of the fish. Another possibility is that high temperature and hypoxia independently trigger the transformation of the gills. The increase in respiratory surface area of crucian carp kept at 25°C coincided with a relatively low [O2]crit at this temperature, indicating an increase in the capacity for oxygen uptake (Fig. 2B,C). This was clearly reflected in the high ratio of oxygen consumption rate to [O2]crit in crucian carp kept at 25°C (Table 1). The relationship between temperature, oxygen consumption rate and [O2]crit of crucian carp observed in the present study resembles that found in a study on goldfish
(Fig. 2A-C) from more than half a century ago (Fry and Hart, 1948), which also showed an unexpectedly low [O2]crit at higher temperatures. These results can now be explained by the present finding that goldfish have protruding lamellae at high, but not low, temperatures. The reason why this transformation of gill morphology has not been observed in goldfish earlier is most likely that goldfish are traditionally kept at rather high temperatures, usually at room temperature. By contrast, Atlantic cod, a species that presumably does not have the ability to adjust the respiratory surface area to its oxygen needs, shows a linear relationship between [O2]crit and the rate of oxygen consumption (Fig. 2C; see also Schurmann and Steffensen, 1997).

The results suggest that the oxygen demand of crucian carp does not trigger a remodelling of the gills unless the water temperature reaches 25°C, which is near the highest temperature that crucian carp normally experience in their habitat for short periods during the summer months (J. S., G. E. N., unpublished observations from the Oslo area). This indicates that gills with non-protruding lamellae are able to supply the crucian carp with sufficient oxygen to sustain aerobic metabolism at 20°C, where its rate of oxygen consumption is around 120 mg kg-1 h-1 (Table 1). The capacity to sustain a high rate of oxygen uptake with a small respiratory surface area could rely on a high O2 affinity of the Hb. We measured oxygen affinity in the presence of 0.1 mol l-1 KCl, which decreases the oxygen affinity, mimicking the intracellular condition. Our data show that the high oxygen affinity (P50=1.8 mmHg at pH 7.7 and 20°C) increases markedly with falling temperature (P50=0.7 mmHg at 10°C) due to the pronounced temperature sensitivity at high in vivo pH (7.7), where the phosphate sensitivity is low (Fig. 4A,B). These properties, which appear to characterise all major isoHbs (Fig. 4D), are consistent with the high blood oxygen affinity previously recorded in goldfish (P50=2.6 mmHg at pH 7.56 and 26°C; Burggren, 1982).

The remodelling of the gills appears to be rapid, since we did not observe any intermediate stages in crucian carp kept at 15°C or 20°C. Thus, it appears to be an 'on/off' response that is triggered either by hypoxia or high temperature, or maybe by their common denominator: an increased demand for oxygen uptake. When we reduced the acclimation temperature for goldfish to 7.5°C, they remodelled their gills to a state with almost no protruding lamellae. Since no intermediate stages were seen in goldfish gills during the 25°C to 15°C transfer, it seems, as in crucian carp, that this is an 'on/off' response that is triggered by either temperature or the rate of oxygen uptake. Intriguingly, Isaia (1972) showed that the water flux across the goldfish gills increased more than five times from 5 to 25°C, which is much greater than would be expected from a diffusion process. It is tempting to suggest that at least part of this increased water flux was caused by an increase in the respiratory surface area. Indeed, Isaia (1972) suggested that the results must indicate 'either an important change in the branchial permeability during adaptation or the functioning of a greater respiratory surface at an increased temperature'. Moreover, it has been found that the common carp Cyprinus carpio, exposed to chronic hypoxia, is able to extract a higher percentage of the available oxygen than normoxic carp (Lomholt and Johansen, 1979).
This could imply that the common carp has the ability to alter its respiratory surface area, possibly in a manner similar to that found in its cyprinid cousins, the crucian carp and goldfish. Moreover, a capacity for gill remodelling to increase or decrease oxygen uptake and water fluxes may not be limited to cyprinids. A gill morphology characterised by thickened lamellae, with epithelial cells being cuboidal or columnar instead of squamous, has been seen in juvenile largemouth bass kept at over-wintering temperatures close to 4°C (Leino and McCormick, 1993).

The present data showed that the change from non-protruding to protruding lamellae occurs between 20 and 25°C in crucian carp, and between 7.5 and 15°C in goldfish. This may reflect species or population differences. Each year, the crucian carp we studied face a severely hypoxic and anoxic environment during the long winter period. Hence, they are more dependent on their glycogen stores than goldfish for survival. Thus, saving energy is likely to be a more critical feature for crucian carp. A small respiratory surface area over a large temperature interval will reduce osmoregulatory costs and, thereby, save energy that can be stored for surviving the long winter. There was also an apparent difference in the ability of these two species to handle soft water. The crucian carp population is well adapted to soft water and does well in Oslo tapwater (20-50 μS cm-1), whereas the goldfish did not do well (did not feed and were lethargic) in Oslo tapwater. Upon recommendation from the importer, we increased the conductivity of the goldfish water to 500 μS cm-1, which had a striking positive effect on the welfare of the goldfish. It is possible that these differences in water conductivity requirements are related to the difference between the two species in the temperature at which gill remodelling takes place. However, at present we can only speculate.

It has been found previously that crucian carp acclimated to hypoxia have a higher rate of oxygen consumption than normoxic crucian carp (Johnston and Bernard, 1984). This increase in oxygen consumption could be due to increased ventilation rates and/or elevated osmoregulatory costs of having a larger respiratory surface area. Similarly, in the present study, the crucian carp displayed a larger increase in oxygen consumption between 20°C and 25°C (Q10=2.9) than between 15 and 20°C (Q10=1.9) (Table 2), which could be explained by the presence of protruding lamellae in the 25°C group, causing elevated osmoregulatory costs. By contrast, ectothermic animals generally show Q10 values that fall with increasing temperature (Prosser, 1986; Withers, 1992). Interestingly, between 10-15°C and 15-20°C, the Q10 values in goldfish (Fry and Hart, 1948) (Table 2) decrease less than in the crucian carp, which may be explained by the goldfish remodelling its gills at a lower temperature than the crucian carp.

To conclude, the present study shows that both crucian carp and goldfish have the ability to remodel their gills by changing the size of the ILCM between the lamellae. Moreover, the response, which has previously been shown to be triggered by hypoxia, can also be triggered by temperature. Thus, at high temperatures both goldfish and crucian carp display gills with clearly protruding lamellae. The remodelling of the gills to gain protruding lamellae is caused by increased apoptosis and cell-cycle arrest in the ILCM (Sollid et al., 2003).
In the light of the present results, it is possible that the signals that trigger this change could include both hypoxia and high temperature, or their common denominator: the need to extract more oxygen from the water. The ability to match the respiratory surface area to oxygen needs may provide a means of reducing water and ion fluxes and, thereby, the osmoregulatory costs. However, our observations suggest that this is a sharp 'on/off' response rather than a graded change, since no intermediate stages are seen except during the short transition from one state to the other. While this transition took several days in hypoxia at 8°C (Sollid et al., 2003), the present study showed that, at 20°C, it could be completed during the few hours that the fish were exposed to hypoxia in the respirometer.

We are grateful to Anny Bang for assistance with the Hb-oxygen-binding measurements. We thank the Research Council of Norway and the Danish Natural Science Research Council for financial support.

Aguiar, L. H., Kalinin, A. L. and Rantin, F. T. (2002). The effects of temperature on the cardio-respiratory function of the neotropical fish Piaractus mesopotamicus. J. Therm. Biol. 27, 299-308.

Berman, M., Benesch, R. and Benesch, R. E. (1971). The removal of organic phosphates from hemoglobin. Arch. Biochem. Biophys. 145, 236-239.

Brittain, T. (1987). The Root effect. Comp. Biochem. Physiol. 86B, 473-481.

Burggren, W. W. (1982). 'Air gulping' improves blood oxygen transport during aquatic hypoxia in the goldfish Carassius auratus. Physiol. Zool. 55, 327-334.

Butler, P. J. and Taylor, E. W. (1975). Effect of progressive hypoxia on respiration in dogfish (Scyliorhinus canicula) at different seasonal temperatures. J. Exp. Biol. 63, 117-130.

Caldwell, R. S. (1969). Thermal compensation of respiratory enzymes in tissues of goldfish. Comp. Biochem. Physiol. 31, 79-93.

Evans, D. H. (1979). Fish. In Comparative Physiology of Osmoregulation in Animals, vol. 1 (ed. G. M. O. Maloiy), pp. 305

Fernandes, M. N. and Rantin, F. T. (1989). Respiratory responses of Oreochromis niloticus (Pisces, Cichlidae) to environmental hypoxia under different thermal conditions. J. Fish Biol. 35, 509-519.

Fry, F. E. J. and Hart, J. S. (1948). The relation of temperature to oxygen consumption in the goldfish. Biol. Bull. 94, 66-77.

Goldspink, G. (1995). Adaptation of fish to different environmental temperature by qualitative and quantitative changes in gene expression. J. Therm. Biol. 20, 167-174.

Hocutt, C. H. and Tilney, R. L. (1985). Changes in gill morphology of Oreochromis mossambicus subjected to heat stress. Environ. Biol. Fish. 14, 107-114.

Houston, A. H. and Cyr, D. (1974). Thermoacclimatory variation in the haemoglobin systems of goldfish (Carassius auratus) and rainbow trout (Salmo gairdneri). J. Exp. Biol. 61, 455-461.

Houston, A. H., Dobric, N. and Kahurananga, R. (1996). The nature of hematological response in fish - studies on rainbow trout Oncorhynchus mykiss exposed to simulated winter, spring and summer conditions. Fish Physiol. Biochem. 15, 339-347.

Houston, A. H. and Rupert, R. (1976). Immediate response of the hemoglobin system of the goldfish, Carassius auratus, to temperature change. 54, 1737-1741.

Isaia, J. (1972). Comparative effects of temperature on sodium and water permeabilities of gills of a stenohaline freshwater fish (Carassius auratus) and a stenohaline marine fish (Serranus scriba, Serranus cabrilla). J. Exp. Biol. 57, 359-366.
Jacobs, D., Esmond, E. F., Melisky, E. L. and Hocutt, C. H. (1981). Morphological changes in gill epithelia of heat stressed rainbow trout, Salmo gairdneri - Evidence in support of a temperature induced surface area change hypothesis. Can. J. Fish. Aquatic Sci. 38, 16-22.

Johnston, I. A. and Bernard, L. M. (1983). Utilization of the ethanol pathway in carp following exposure to anoxia. J. Exp. Biol. 104, 73-78.

Johnston, I. A. and Bernard, L. M. (1984). Quantitative study of capillary supply to the skeletal muscles of crucian carp Carassius carassius L. - effects of hypoxic acclimation. Physiol. Zool. 57, 9-18.

Leino, R. L. and McCormick, J. H. (1993). Responses of juvenile largemouth bass to different pH and aluminum levels at overwintering temperatures - effects on gill morphology, electrolyte balance, scale calcium, liver glycogen, and depot fat. 71, 531-543.

Lomholt, J. P. and Johansen, K. (1979). Hypoxia acclimation in carp - how it affects O2 uptake, ventilation, and O2 extraction from water. Physiol. Zool. 52, 38-49.

Maricondi-Massari, M., Kalinin, A. L., Glass, M. L. and Rantin, F. T. (1998). The effects of temperature on oxygen uptake, gill ventilation and ECG waveforms in the Nile tilapia, Oreochromis niloticus. J. Therm. Biol. 23, 283-290.

Nilsson, G. E. (1990). Long term anoxia in crucian carp - Changes in the levels of amino acid and monoamine neurotransmitters in the brain, catecholamines in chromaffin tissue, and liver glycogen. J. Exp. Biol. 150, 295-320.

Nilsson, G. E. (1992). Evidence for a role of GABA in metabolic depression during anoxia in crucian carp (Carassius carassius). J. Exp. Biol. 164, 243-259.

Nolan, D. T., Hadderingh, R. H., Spanings, F. A. T., Jenner, H. A. and Bonga, S. E. W. (2000). Acute temperature elevation in tap and Rhine water affects skin and gill epithelia, hydromineral balance, and gill Na+/K+-ATPase activity of brown trout (Salmo trutta) smolts. Can. J. Fish. Aquatic Sci. 57, 708-718.

Prosser, C. L. (1986). Temperature. In (ed. C. L. Prosser), pp. 260-321. New York: John Wiley and Sons.

Schurmann, H. and Steffensen, J. F. (1997). Effects of temperature, hypoxia and activity on the metabolism of juvenile Atlantic cod. J. Fish Biol. 50, 1166-1180.

Shoubridge, E. A. and Hochachka, P. W. (1980). Ethanol - novel end product of vertebrate anaerobic metabolism. Science 209, 308-309.

Shoubridge, E. A. and Hochachka, P. W. (1983). The integration and control of metabolism in the anoxic goldfish. Mol. Physiol. 4, 165-195.

Sollid, J., De Angelis, P., Gundersen, K. and Nilsson, G. E. (2003). Hypoxia induces adaptive and reversible gross morphological changes in crucian carp gills. J. Exp. Biol. 206, 3667-3673.

Tilney, R. L. and Hocutt, C. H. (1987). Changes in gill epithelia of Oreochromis mossambicus subjected to cold shock. Environ. Biol. Fish. 19, 35-44.

Weber, R. E. (1981). Cationic control of O2 affinity in lugworm erythrocruorin. Nature 292, 386-387.

Weber, R. E. (2000). Adaptations for oxygen transport: lessons from fish hemoglobins. In Hemoglobin Function in Vertebrates, Molecular Adaptation in Extreme and Temperate Environments (ed. G. Di Prisco, B. Giardina and R. E. Weber), pp. 22-37. Milano, Italy: Springer-Verlag.

Weber, R. E., Jensen, F. B. and Cox, R. P. (1987). Analysis of teleost hemoglobin by Adair and Monod-Wyman-Changeux models. Effects of nucleoside triphosphates and pH on oxygenation of tench hemoglobin. J. Comp. Physiol. B 157, 145-152.
Wells, R. M. G. and Weber, R. E. (1989). The measurement of oxygen affinity in blood and haemoglobin solutions. In Techniques in Comparative Physiology (ed. C. R. Bridges and P. J. Butler), pp. 279-303. Cambridge: Cambridge University Press.

Withers, P. C. (1992). Temperature. In Comparative Animal Physiology (ed. P. C. Withers), pp. 122-191. New York: Saunders College Publishing.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7516364455223083, "perplexity": 10750.853391158156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00730.warc.gz"}
GSoC Week 2 — Striving to make the patch better!

As I mentioned in my previous blog, my commits still had a few rough edges, so I spent this week fixing those!

The first issue I worked on was fixing the memory leak. Let me show the function where the memory leak was:

I thought that the strbuf created on line 3 was not getting freed and hence was causing the memory leak. But I was completely wrong. My mentor, Christian, helped in identifying the real cause of the memory leak. The real reason for the memory leak was:

• We passed commit_buf to the function replace_persons_using_mailmap(). Let's say the address of commit_buf was X.
• Then on line 4, we add commit_buf to a strbuf. This call allocates memory to store a copy of the commit_buf buffer inside the sb.buf field. Let's say this newly allocated buffer has the address Y.
• So, we re-write the ident lines on this new buffer (whose address is Y).
• Further, on line 9, when we execute return strbuf_detach(&sb, NULL);, we are returning the address of the newly allocated buffer. Basically, we are returning the buffer stored at address Y.
• This returned buffer is later freed in the caller of the function.

So, we basically didn't operate on the original buffer passed to the function, and in the process we also lost its address, so it never got freed! Hence the memory leak!

In order to fix this memory leak, I figured we needed to operate on the original buffer and create a strbuf that doesn't allocate a new buffer but instead uses the same one. Looking through strbuf.c, I came across the function strbuf_attach(). So, I just replaced the call to strbuf_addstr() with strbuf_attach(), fixing the memory leak! (A small illustrative sketch of this pattern is included at the end of this post.)

In order to verify that this actually fixed the memory leak, I printed the address of commit_buf before passing it to replace_persons_using_mailmap() and after returning from the function. As we can see, the address is the same, so we are not losing ownership of the buffer, and it eventually gets freed!

In order to debug this leak, I also took the help of gdb. I am writing another detailed blog on how I used gdb to debug git.

The next issue we had was that mailmap was always enabled in git cat-file --batch. In order to fix it, I created a static global flag called use_mailmap, which is enabled only when the --use-mailmap option is passed.

The next and very important task was to add tests for the changes I have made in git-cat-file. This took some exploration of how the existing tests are written. I added two tests in t4203-mailmap.sh, since all the mailmap-related tests live there. My tests check that the mailmap mechanism is disabled when --no-use-mailmap is given and enabled when --use-mailmap is used.

All my changes discussed in this blog can be found here.

So yeah, that's all for this week! I will keep working on improving my patch and will be back with updates next week! Till then, goodbye and thanks for reading :)
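P.S. For anyone curious about the strbuf pattern involved, here is a minimal sketch of the leak and the fix. It is written against git's real strbuf API (strbuf_addstr(), strbuf_attach(), strbuf_detach()), but the function bodies are simplified stand-ins rather than the actual code from my patch, and they assume commit_buf was allocated with size + 1 bytes and is NUL-terminated, as git's object buffers are.

```c
#include "git-compat-util.h"
#include "strbuf.h"

/* Leaky version: strbuf_addstr() copies commit_buf into a freshly
 * allocated buffer, so what we detach and return is the copy.
 * The caller frees the returned copy, but the original commit_buf
 * is never freed -- that is the leak. */
static char *rewrite_idents_leaky(char *commit_buf, unsigned long size)
{
	struct strbuf sb = STRBUF_INIT;

	strbuf_addstr(&sb, commit_buf);   /* copies into a new allocation;
					   * size unused here because addstr
					   * relies on NUL termination */
	/* ... rewrite the author/committer ident lines inside sb ... */
	return strbuf_detach(&sb, NULL);  /* returns the copy; commit_buf leaks */
}

/* Fixed version: strbuf_attach() wraps the existing allocation instead
 * of copying it, so the buffer we later detach (and the caller frees)
 * is the very same memory that was passed in. */
static char *rewrite_idents_fixed(char *commit_buf, unsigned long size)
{
	struct strbuf sb = STRBUF_INIT;

	strbuf_attach(&sb, commit_buf, size, size + 1); /* take ownership, no copy */
	/* ... rewrite the author/committer ident lines inside sb ... */
	return strbuf_detach(&sb, NULL);  /* same underlying allocation as commit_buf */
}
```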
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2610724866390228, "perplexity": 2173.2536410433386}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00512.warc.gz"}
# likert

From HH v3.1-35

##### Diverging stacked barcharts for Likert, semantic differential, rating scale data, and population pyramids.

Constructs and plots diverging stacked barcharts for Likert, semantic differential, rating scale data, and population pyramids.

Keywords: hplot, shiny

##### Usage

likert(x, ...)

likertplot(x, ...)

# S3 method for likert
plot(x, ...)

# S3 method for formula
plot.likert(x, data, ReferenceZero=NULL, value, levelsName="",
  scales.in=NULL,        ## use scales=
  between=list(x=1 + (horizontal), y=.5 + 2*(!horizontal)),
  auto.key.in=NULL,      ## use auto.key=
  panel.in=NULL,         ## use panel=
  horizontal=TRUE,
  par.settings.in=NULL,  ## use par.settings=
  ...,
  as.percent = FALSE,
  ## titles
  ylab= if (horizontal) {
          if (length(x)==3) deparse(x[[2]]) else "Question"
        } else if (as.percent != FALSE) "Percent" else "Count",
  xlab= if (!horizontal) {
          if (length(x)==3) deparse(x[[2]]) else "Question"
        } else if (as.percent != FALSE) "Percent" else "Count",
  main = x.sys.call,
  ## right axis
  rightAxisLabels = rowSums(data.list$Nums),
  rightAxis = !missing(rightAxisLabels),
  ylab.right = if (rightAxis) "Row Count Totals" else NULL,
  xlab.top = NULL,
  right.text.cex = if (horizontal) {
                     ## lazy evaluation
                     if (!is.null(scales$y$cex)) scales$y$cex else .8
                   } else {
                     if (!is.null(scales$x$cex)) scales$x$cex else .8
                   },
  ## scales
  xscale.components = xscale.components.top.HH,
  yscale.components = yscale.components.right.HH,
  xlimEqualLeftRight = FALSE,
  xTickLabelsPositive = TRUE,
  ## row sequencing
  as.table=TRUE,
  positive.order=FALSE,
  data.order=FALSE,
  reverse=ifelse(horizontal, as.table, FALSE),
  ## resizePanels arguments
  h.resizePanels=sapply(result$y.used.at, length),
  w.resizePanels=sapply(result$x.used.at, length),
  ## color options
  reference.line.col="gray65",
  key.border.white=TRUE,
  col=likertColor(Nums.attr$nlevels, ReferenceZero=ReferenceZero,
                  colorFunction=colorFunction,
                  colorFunctionOption=colorFunctionOption),
  colorFunction="diverge_hcl",
  colorFunctionOption="lighter")

# S3 method for default
plot.likert(x, positive.order=FALSE, ylab=names(dimnames(x)[1]),
  xlab=if (as.percent != FALSE) "Percent" else "Count",
  main=xName,
  reference.line.col="gray65",
  col.strip.background="gray97",
  col=likertColor(attr(x, "nlevels"), ReferenceZero=ReferenceZero,
                  colorFunction=colorFunction,
                  colorFunctionOption=colorFunctionOption),
  colorFunction="diverge_hcl",
  colorFunctionOption="lighter",
  as.percent=FALSE,
  par.settings.in=NULL,
  horizontal=TRUE,
  ReferenceZero=NULL,
  ...,
  key.border.white=TRUE,
  xName=deparse(substitute(x)),
  rightAxisLabels=rowSums(abs(x)),
  rightAxis=!missing(rightAxisLabels),
  ylab.right=if (rightAxis) "Row Count Totals" else NULL,
  panel=panel.barchart,
  xscale.components=xscale.components.top.HH,
  yscale.components=yscale.components.right.HH,
  xlimEqualLeftRight=FALSE,
  xTickLabelsPositive=TRUE,
  reverse=FALSE)

# S3 method for array
plot.likert(x,
  condlevelsName=paste("names(dimnames(", xName, "))[-(1:2)]", sep=""),
  xName=deparse(substitute(x)),
  main=paste("layers of", xName, "by", condlevelsName),
  ...)

# S3 method for likert
plot.likert(x, ...)
## See Details

# S3 method for list
plot.likert(x,   ## named list of matrices, 2D tables,
                 ## 2D ftables, or 2D structables,
                 ## or all-numeric data.frames
  condlevelsName="ListNames",
  xName=deparse(substitute(x)),
  main=paste("List items of", xName, "by", condlevelsName),
  layout=if (length(dim.x) > 1) dim.x else {
           if (horizontal) c(1, length(x)) else c(length(x), 1)},
  positive.order=FALSE,
  strip=!horizontal,
  strip.left=horizontal,
  strip.left.values=names(x),
  strip.values=names(x),
  strip.par=list(cex=1, lines=1),
  strip.left.par=list(cex=1, lines=1),
  horizontal=TRUE,
  ...,
  rightAxisLabels=sapply(x, function(x) rowSums(abs(x)), simplify = FALSE),
  rightAxis=!missing(rightAxisLabels),
  resize.height.tuning=-.5,
  resize.height=if (missing(layout) || length(dim.x) != 2) {
                  c("nrow","rowSums")
                } else {
                  rep(1, layout[2])
                },
  resize.width=if (missing(layout)) { 1 } else { rep(1, layout[1]) },
  box.ratio=if ( length(resize.height)==1 && resize.height == "rowSums") 1000 else 2,
  xscale.components=xscale.components.top.HH,
  yscale.components=yscale.components.right.HH)

# S3 method for table
plot.likert(x, ..., xName=deparse(substitute(x)))

# S3 method for ftable
plot.likert(x, ..., xName=deparse(substitute(x)))

# S3 method for structable
plot.likert(x, ..., xName=deparse(substitute(x)))

# S3 method for data.frame
plot.likert(x, ..., xName=deparse(substitute(x)))

xscale.components.top.HH(...)

yscale.components.right.HH(...)

##### Arguments

x For the formula method, a model formula. All terms in the formula must be the names of columns in the data.frame argument data or the special abbreviation . only on the right-hand-side. Functions of the names will not work. The right-hand-side must be either . or the sum of the names of numeric variables in data. Non-syntactic names must be in quotes (single ' or double "), but not backticks. The . on the right-hand-side is expanded to the formula containing the sum of all remaining (after the response and the conditioning variables) numeric columns in data. An empty left-hand-side is interpreted as the rownames(data). See the examples for all possible forms of formula recognized by the likert function. Otherwise, any numeric object stored as a vector, matrix, array, data.frame, table, ftable, structable (as defined in the vcd package), or as a list of named two-dimensional objects. This is the only required argument. See the Details section for restrictions on the form of data.frame, list, ftable, and structable arguments.

data For the formula method, a data.frame. Do not use variable names ".value" or ".variable".

ReferenceZero Numeric scalar or NULL. The position in the range seq(0, attr(x, "nlevels")+.5, .5) where the reference line at 0 will be placed. attr(x, "nlevels") is the number of columns of the original argument x, before it has been coerced to a "likert" object. The default NULL corresponds to the middle level if there are an odd number of levels, and to half-way between the two middle levels if there are an even number of levels. This argument is used when the number of positive levels and the number of negative levels are not the same. For example, with 4 levels c("Disagree", "Neutral", "Weak Agree", "Strong Agree"), the argument would be specified ReferenceZero=2 indicating that the graphical split would be in the middle of the second group with label "Neutral".

value Name of the numeric variable containing the data when the formula method is used with the long data form. The predictor in the formula will be a factor name.
The name of the predictor will be used as the title in the key. levelsName (optional) Name of the implied factor distinguishing the columns of the response variables when the formula method is used with the wide data form. This name will be used as the title in the key. positive.order If FALSE, the default value, the original order of the rows is retained. This is necessary for arrays, because each panel has the same rownames. If TRUE, rows are ordered within each panel with the row whose bar goes farthest to the right at the top of a panel of horizontal bars or at the left of a panel of vertical bars. positive.order is frequently set to TRUE for lists. data.order formula method only. If positive.order is TRUE, this data.order variable is ignored. If FALSE, the default value, and the rows are specified by a factor, then they are ordered by their levels. If TRUE, then the rows are ordered by their order in the input data.frame. as.percent When as.percent==TRUE or as.percent=="noRightAxis", then the values in each row are rescaled to row percents. When as.percent==TRUE the original row totals are used as rightAxisLabels, rightAxis is set to TRUE, the ylab.right is by default set to "Row Count Totals" (the user can change its value in the calling sequence). When as.percent=="noRightAxis", then rightAxis will be set to FALSE. as.table Standard lattice argument. See barchart. par.settings.in, scales.in, auto.key.in, panel.in These are placeholders for lattice arguments that lets the user specify some lattice par.settings and still retain the ones that are prespecified in the plot.likert.default. ylab, xlab, ylab.right, xlab.top, main Standard lattice graph labels in barchart. right.text.cex The right axis, as used here for the "Row Count Totals", has non-standard controls. It's cex follows the cex of the left axis, unless this argument is used to override that value. When codehorizontal=FALSE, then the top axis defaults to follow the bottom axis unless overridden by right.text.cex. between Standard lattice argument. col Vector of color names for the levels of the agreement factor. Although the colors can be specified as an arbitrary vector of color names, for example, col=c('red','blue','#4AB3F2'), usually specifying one of the diverging palettes from diverge_hcl or sequential palettes from sequential_hcl will suffice. For less intense colors, you can use the middle colors from a larger set of colors; e.g., col=sequential_hcl(11)[5:2]. See the last AudiencePercent example below for this usage. colorFunction, colorFunctionOption See likertColor. reference.line.col Color for reference line at zero. col.strip.background Background color for the strip labels. key.border.white Logical. If TRUE, then place a white border around the rect in the key, else use the col of the rect itself. horizontal Logical, with default TRUE indicating horizontal bars, will be passed to the barchart function by the plot.likert method. In addition, it interchanges the meaning of resize.height and resize.width arguments to the likert functions applied to arrays and lists. other arguments. These will be passed to the barchart function by the plot.likert method. The most useful of these is the border argument which defaults to make the borders of the bars the same color as the bars themselves. A scalar alternative (border="white" being our first choice) puts a border around each bar in the stacked barchart. This works very well when the ReferenceZero line is between two levels. 
It gives a misleading division of the central bar when the ReferenceZero is in the middle of a level. See the example in the examples section. Arguments to the lattice auto.key=list() argument (described in barchart) will be used in the legend. See the examples. strip.left, strip Logical. The default strip.left=TRUE places the strip labels on the left of each panel as in the first professional challenges example. The alternative strip.left=FALSE puts the strip labels on the top of each panel, the traditional lattice strip label position. condlevelsName, strip.left.values, strip.values, strip.par, strip.left.par, layout Arguments which will be passed to ResizeEtc. xName Name of the argument in its original environment. rightAxis logical. Should right axis values be displayed? Defaults to FALSE unless rightAxisLabels are specified. rightAxisLabels Values to be displayed on the right axis. The default values are the row totals. These are sensible for tables of counts. When the data is rescaled to percents by the as.percent=TRUE argument, then the rightAxisLabels are still defaulted to the row totals for the counts. We illustrate this usage in the ProfChal example. resize.height.tuning Tuning parameter used to adjust the space between bars as specified by the resize.height argument to the ResizeEtc function. h.resizePanels, resize.height Either character scalar or numeric vector. If "nrow", then the panels heights are proportional to the number of bars in each panel. If "rowSums" and there is exactly one bar per panel, then the panels heights are proportional to the total count in each bar, and see the discussion of the box.ratio argument. If a numeric vector, the panel heights are proportional to the numbers in the argument. w.resizePanels, resize.width Numeric vector. The panel widths are proportional to the numbers in the argument. box.ratio If there are more than one bar in any panel, then this defaults to the trellis standard value of 2. If there is exactly one bar in a panel, then the value is 1000, with the intent to minimize the white space in the panel. In this way, when as.percent==TRUE, the bar total area is the count and the bar widths are all equal at 100%. See the example below. panel panel function eventually to be used by barchart. xscale.components, yscale.components See yscale.components.default. xscale.components.top.HH constructs the top x-axis labels, when needed, as the names of the bottom x-axis labels. yscale.components.right.HH constructs the right y-axis labels, when needed, as the names of the left y-axis labels. The names are placed automatically by the plot.likert methods based on the value of the arguments as.percent, rightAxis, and rightAxisLabels. By default, when rightAxis != FALSE the layout.widths are set to list(ylab.right=5, right.padding=0). Otherwise, those arguments are left at their default values. They may be adjusted with an argument of the form par.settings.in= list(layout.widths=list(ylab.right=5, right.padding=0)). Similarly, spacing for the top labels can be adjusted with an argument of the form par.settings.in=list(layout.heights=list(key.axis.padding=6)). xlimEqualLeftRight Logical. The default is FALSE. If TRUE and at and labels are not explicitly specified, then the left and right x limits are set to negative and positive of the larger of the absolute value of the original x limits. When !horizontal, this argument applies to the y coordinate. xTickLabelsPositive Logical. The default is TRUE. 
If TRUE and at and labels are not explicitly specified, then the tick labels on the negative side are displayed as positive values. When !horizontal, this argument applies to the y coordinate. reverse Logical. The default is FALSE. If TRUE, the rows of the input matrix are reversed. The default is to plot the rows from top-to-bottom for horizontal bars and from left-to-write for vertical bars. reverse, positive.order, and horizontal are independent. All eight combinations are possible. See the Eight sequences and orientations section in the example for all eight. ##### Details The counts (or percentages) of respondents on each row who agree with the statement are shown to the right of the zero line; the counts (or percentages) who disagree are shown to the left. The counts (or percentages) for respondents who neither agree nor disagree are split down the middle and are shown in a neutral color. The neutral category is omitted when the scale has an even number of choices. It is difficult to compare lengths without a common baseline. In this situation, we are primarily interested in the total count (or percent) to the right or left of the zero line; the breakdown into strongly or not is of lesser interest so that the primary comparisons do have a common baseline of zero. The rows within each panel are displayed in their original order by default. If the argument positive.order=TRUE is specified, the rows are ordered by the counts (or percentages) who agree. Diverging stacked barcharts are also called "two-directional stacked barcharts". Some authors use the term "floating barcharts" for vertical diverging stacked barcharts and the term "sliding barcharts" for horizontal diverging stacked barcharts. All items in a list of named two-dimensional objects must have the same number of columns. If the items have different column names, the column names of the last item in the list will be used in the key. If the dimnames of the matrices are named, the names will be used in the plot. It is possible to produce a likert plot with a list of objects with different numbers of columns, but not with the plot.likert.list method. These must be done manually by using the ResizeEtc function on each of the individual likert plots. The difficulty is that the legend is based on the last item in the list and will have the wrong number of values for some of the panels. A single data.frame x will be plotted as data.matrix(x[sapply(x, is.numeric)]). The subscripting on the class of the columns is there to remove columns of characters (which would otherwise be coerced to NA) and factor columns (which would otherwise be coerced to integers). A data.frame with only numeric columns will work in a named list. A list of data.frame with factors or characters will be plotted by automatically removing columns that are not numeric. ftable and structable arguments x will be plotted as as.table(x). This changes the display sequence. Therefore the user will probably want to use aperm on the ftable or structable before using plot.likert. The likert method is designed for use with "likert" objects created with the independent likert package. It is not recommended that the HH package and the likert package both be loaded at the same time, as they have incompatible usage of the exported function names likert and plot.likert. If the likert package is installed, it can be run without loading by using the function calls likert::likert() and likert:::plot.likert(). ##### Value A "trellis" object containing the plot. 
The plot will be automatically displayed unless the result is assigned to an object. ##### Note Documentation note: Most of the plots drawn by plot.likert have a long left-axis tick label. They therefore require a wider window than R's default of a nominal 7in $\times$ 7in window. The comments with the examples suggest aesthetic window sizes. Technical note: There are three (almost) equivalent calling sequences for likert plots. 1. likert(x) ## recommended likert is an alias for plot.likert(). 2. plot.likert(x) plot.likert is both a method of plot for "likert" objects, and a generic function in its own right. There are methods of plot.likert for "formula", "matrix", "array", "table", and several other classes of input objects. 3. plot(as.likert(x)) Both likert and plot.likert work by calling the as.likert function on their argument x. Once as.likert has converted its argument to a "likert" object, the method dispatch technology for the generic plot.likert is in play. The user can make the explicit call as.likert(x) to see what a "likert" object looks like, but is very unlikely to want to look a second time. ##### References Richard M. Heiberger, Naomi B. Robbins (2014)., "Design of Diverging Stacked Bar Charts for Likert Scales and Other Applications", Journal of Statistical Software, 57(5), 1--32, http://www.jstatsoft.org/v57/i05/. Richard Heiberger and Naomi Robbins (2011), "Alternative to Charles Blow's Figure in \"Newt's War on Poor Children\"", Forbes OnLine, December 20, 2011. http://www.forbes.com/sites/naomirobbins/2011/12/20/alternative-to-charles-blows-figure-in-newts-war-on-poor-children-2/ Naomi Robbins (2011), "Visualizing Data: Challenges to Presentation of Quality Graphics---and Solutions", Amstat News, September 2011, 28--30. http://magazine.amstat.org/blog/2011/09/01/visualizingdata/ Luo, Amy and Tim Keyes (2005). "Second Set of Results in from the Career Track Member Survey," Amstat News. Arlington, VA: American Statistical Association. barchart, ResizeEtc, as.likert, as.matrix.listOfNamedMatrices, pyramidLikert ##### Aliases • likert • plot.likert • likertplot • plot.likert.formula • plot.likert.default • plot.likert.array • plot.likert.likert • plot.likert.list • plot.likert.table • plot.likert.ftable • plot.likert.structable • plot.likert.data.frame • xscale.components.top.HH • yscale.components.right.HH • floating • pyramid • sliding • semantic • differential ##### Examples # NOT RUN { ## See file HH/demo/likert-paper.r for a complete set of examples using ## the formula method into the underlying lattice:::barchart plotting ## technology. See file HH/demo/likert-paper-noFormula.r for the same ## set of examples using the matrix and list of matrices methods. See ## file HH/demo/likertMosaic-paper.r for the same set of examples using ## the still experimental functions built on the vcd:::mosaic as the ## underlying plotting technology data(ProfChal) ## ProfChal is a data.frame. ## See below for discussion of the dataset. ## Count plot likert(Question ~ . , ProfChal[ProfChal$Subtable=="Employment sector",], main='Is your job professionally challenging?', ylab=NULL, sub="This plot looks better in a 9in x 4in window.") ## Percent plot calculated automatically from Count data likert(Question ~ . , ProfChal[ProfChal$Subtable=="Employment sector",], as.percent=TRUE, ylab=NULL, sub="This plot looks better in a 9in x 4in window.") ## formula method data(NZScienceTeaching) likert(Question ~ . 
| Subtable, data=NZScienceTeaching, ylab=NULL, scales=list(y=list(relation="free")), layout=c(1,2)) # } # NOT RUN { ## formula notation with expanded right-hand-side likert(Question ~ "Strongly disagree" + Disagree + Neutral + Agree + "Strongly agree" | Subtable, data=NZScienceTeaching, ylab=NULL, scales=list(y=list(relation="free")), layout=c(1,2)) # } # NOT RUN { # } # NOT RUN { ## formula notation with long data arrangement NZScienceTeachingLong <- reshape2::melt(NZScienceTeaching, id.vars=c("Question", "Subtable")) names(NZScienceTeachingLong)[3] <- "Agreement" likert(Question ~ Agreement | Subtable, value="value", data=NZScienceTeachingLong, ylab=NULL, scales=list(y=list(relation="free")), layout=c(1,2)) # } # NOT RUN { ## Examples with higher-dimensional arrays. tmp3 <- array(1:24, dim=c(2,3,4), dimnames=list(A=letters[1:2], B=LETTERS[3:5], C=letters[6:9])) ## positive.order=FALSE is the default. With arrays ## the rownames within each item of an array are identical. ## likert(tmp3) likert(tmp3, layout=c(1,4)) likert(tmp3, layout=c(2,2), resize.height=c(2,1), resize.width=c(3,4)) ## plot.likert interprets vectors as single-row matrices. ## http://survey.cvent.com/blog/customer-insights-2/box-scores-are-not-just-for-baseball Responses <- c(15, 13, 12, 25, 35) names(Responses) <- c("Strongly Disagree", "Disagree", "No Opinion", "Agree", "Strongly Agree") # } # NOT RUN { likert(Responses, main="Retail-R-Us offers the best everyday prices.", sub="This plot looks better in a 9in x 2.6in window.") # } # NOT RUN { ## reverse=TRUE is needed for a single-column key with ## horizontal=FALSE and with space="right" likert(Responses, horizontal=FALSE, aspect=1.5, main="Retail-R-Us offers the best everyday prices.", auto.key=list(space="right", columns=1, sub="This plot looks better in a 4in x 3in window.") # } # NOT RUN { ## Since age is always positive and increases in a single direction, ## this example uses colors from a sequential palette for the age ## groups. In this example we do not use a diverging palette that is ## appropriate when groups are defined by a characteristic, such as ## strength of agreement or disagreement, that can increase in two directions. ## Initially we use the default Blue palette in the sequential_hcl function. data(AudiencePercent) likert(AudiencePercent, auto.key=list(between=1, between.columns=2), xlab=paste("Percentage of audience younger than 35 (left of zero)", "and older than 35 (right of zero)"), main="Target Audience", col=rev(colorspace::sequential_hcl(4)), sub="This plot looks better in a 7in x 3.5in window.") ## The really light colors in the previous example are too light. ## Therefore we use the col argument directly. We chose to use an ## intermediate set of Blue colors selected from a longer Blue palette. 
likert(AudiencePercent, positive.order=TRUE, auto.key=list(between=1, between.columns=2), xlab=paste("Percentage of audience younger than 35", "(left of zero) and older than 35 (right of zero)"), main="Brand A has the most even distribution of ages", col=colorspace::sequential_hcl(11)[5:2], scales=list(x=list(at=seq(-90,60,10), labels=as.vector(rbind("",seq(-80,60,20))))), sub="This plot looks better in a 7in x 3.5in window.") # } # NOT RUN { # } # NOT RUN { ## See the ?as.pyramidLikert help page for these examples ## Population Pyramid data(USAge.table) USA79 <- USAge.table[75:1, 2:1, "1979"]/1000000 PL <- likert(USA79, main="Population of United States 1979 (ages 0-74)", xlab="Count in Millions", ylab="Age", scales=list( y=list( limits=c(0,77), at=seq(1,76,5), labels=seq(0,75,5), tck=.5)) ) PL as.pyramidLikert(PL) likert(USAge.table[75:1, 2:1, c("1939","1959","1979")]/1000000, main="Population of United States 1939,1959,1979 (ages 0-74)", sub="Look for the Baby Boom", xlab="Count in Millions", ylab="Age", scales=list( y=list( limits=c(0,77), at=seq(1,76,5), labels=seq(0,75,5), tck=.5)), strip.left=FALSE, strip=TRUE, layout=c(3,1), between=list(x=.5)) # } # NOT RUN { Pop <- rbind(a=c(3,2,4,9), b=c(6,10,12,10)) dimnames(Pop)[[2]] <- c("Very Low", "Low", "High", "Very High") likert(as.listOfNamedMatrices(Pop), as.percent=TRUE, resize.height="rowSums", strip=FALSE, strip.left=FALSE, main=paste("Area and Height are proportional to 'Row Count Totals'.", "Width is exactly 100%.", sep="\n")) ## Professional Challenges example. ## ## The data for this example is a list of related likert scales, with ## each item in the list consisting of differently named rows. The data ## is from a questionnaire analyzed in a recent Amstat News article. ## The study population was partitioned in several ways. Data from one ## of the partitions (Employment sector) was used in the first example ## in this help file. The examples here show various options for ## displaying all partitions on the same plot. ## data(ProfChal) levels(ProfChal$Subtable)[6] <- "Prof Recog" ## reduce length of label ## 1. Plot counts with rows in each panel sorted by positive counts. ## # } # NOT RUN { likert(Question ~ . | Subtable, ProfChal, positive.order=TRUE, main="This works, but needs more specified arguments to look good") likert(Question ~ . | Subtable, ProfChal, scales=list(y=list(relation="free")), layout=c(1,6), positive.order=TRUE, between=list(y=0), strip=FALSE, strip.left=strip.custom(bg="gray97"), par.strip.text=list(cex=.6, lines=5), main="Is your job professionally challenging?", ylab=NULL, sub="This looks better in a 10inx7in window") # } # NOT RUN { ProfChalCountsPlot <- likert(Question ~ . | Subtable, ProfChal, scales=list(y=list(relation="free")), layout=c(1,6), positive.order=TRUE, box.width=unit(.4,"cm"), between=list(y=0), strip=FALSE, strip.left=strip.custom(bg="gray97"), par.strip.text=list(cex=.6, lines=5), main="Is your job professionally challenging?", rightAxis=TRUE, ## display Row Count Totals ylab=NULL, sub="This looks better in a 10inx7in window") ProfChalCountsPlot # } # NOT RUN { ## 2. Plot percents with rows in each panel sorted by positive percents. ## This is a different sequence than the counts. Row Count Totals are ## displayed on the right axis. ProfChalPctPlot <- likert(Question ~ . 
| Subtable, ProfChal, as.percent=TRUE, ## implies display Row Count Totals scales=list(y=list(relation="free")), layout=c(1,6), positive.order=TRUE, box.width=unit(.4,"cm"), between=list(y=0), strip=FALSE, strip.left=strip.custom(bg="gray97"), par.strip.text=list(cex=.6, lines=5), main="Is your job professionally challenging?", rightAxis=TRUE, ## display Row Count Totals ylab=NULL, sub="This looks better in a 10inx7in window") ProfChalPctPlot ## 3. Putting both percents and counts on the same plot, both in ## the order of the positive percents. LikertPercentCountColumns(Question ~ . | Subtable, ProfChal, layout=c(1,6), scales=list(y=list(relation="free")), ylab=NULL, between=list(y=0), strip.left=strip.custom(bg="gray97"), strip=FALSE, par.strip.text=list(cex=.7), positive.order=TRUE, main="Is your job professionally challenging?") ## Restore original name ## levels(ProfChal$Subtable)[6] <- "Attitude\ntoward\nProfessional\nRecognition" # } # NOT RUN { # } # NOT RUN { ## 4. All possible forms of formula for the likert formula method: data(ProfChal) row.names(ProfChal) <- abbreviate(ProfChal$Question, 8) likert( Question ~ . | Subtable, data=ProfChal, scales=list(y=list(relation="free")), layout=c(1,6)) likert( Question ~ "Strongly Disagree" + Disagree + "No Opinion" + Agree + "Strongly Agree" | Subtable, data=ProfChal, scales=list(y=list(relation="free")), layout=c(1,6)) likert( Question ~ . , data=ProfChal) likert( Question ~ "Strongly Disagree" + Disagree + "No Opinion" + Agree + "Strongly Agree", data=ProfChal) likert( ~ . | Subtable, data=ProfChal, scales=list(y=list(relation="free")), layout=c(1,6)) likert( ~ "Strongly Disagree" + Disagree + "No Opinion" + Agree + "Strongly Agree" | Subtable, data=ProfChal, scales=list(y=list(relation="free")), layout=c(1,6)) likert( ~ . , data=ProfChal) likert( ~ "Strongly Disagree" + Disagree + "No Opinion" + Agree + "Strongly Agree", data=ProfChal) # } # NOT RUN { # } # NOT RUN { ## 5. putting the x-axis tick labels on top for horizontal plots ## putting the y-axis tick lables on right for vertical plots ## ## This non-standard specification is a consequence of using the right ## axis labels for different values than appear on the left axis labels ## with horizontal plots, and using the top axis labels for different ## values than appear on the bottom axis labels with vertical plots. ## Percent plot calculated automatically from Count data tmph <- likert(Question ~ . , ProfChal[ProfChal$Subtable=="Employment sector",], as.percent=TRUE, ylab=NULL, sub="This plot looks better in a 9in x 4in window.") tmph$x.scales$labels names(tmph$x.scales$labels) <- tmph$x.scales$labels update(tmph, scales=list(x=list(alternating=2)), xlab=NULL, xlab.top="Percent") tmpv <- likert(Question ~ . , ProfChal[ProfChal$Subtable=="Employment sector",], as.percent=TRUE, main='Is your job professionally challenging?', sub="likert plots with long Question names look better horizontally. With effort they can be made to look adequate vertically.", horizontal=FALSE, scales=list(y=list(alternating=2), x=list(rot=c(90, 0))), ylab.right="Percent", ylab=NULL, xlab.top="Column Count Totals", par.settings=list( layout.heights=list(key.axis.padding=5), layout.widths=list(key.right=1.5, right.padding=0)) ) tmpv$y.scales$labels names(tmpv$y.scales$labels) <- tmpv$y.scales$labels tmpv tmpv$x.limits <- abbreviate(tmpv$x.limits,8) tmpv$x.scales\$rot=c(0, 0) tmpv # } # NOT RUN { # } # NOT RUN { ## illustration that a border on the bars is misleading when it splits a bar. 
tmp <- data.frame(a=1, b=2, c=3) likert(~ . , data=tmp, ReferenceZero=2, main="No border. OK.") likert(~ . , data=tmp, ReferenceZero=2, border="white", main="Border. Misleading split of central bar.") likert(~ . , data=tmp, ReferenceZero=2.5, main="No border. OK.") likert(~ . , data=tmp, ReferenceZero=2.5, border="white", main="Border. OK.") # } # NOT RUN { # } # NOT RUN { ## run the shiny app shiny::runApp(system.file("shiny/likert", package="HH")) # } # NOT RUN { ## The ProfChal data is done again with explicit use of ResizeEtc ## in ?HH:::ResizeEtc # } ` Documentation reproduced from package HH, version 3.1-35, License: GPL (>= 2) ### Community examples Looks like there are no examples yet.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31357109546661377, "perplexity": 5328.557088274904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146342.41/warc/CC-MAIN-20200226115522-20200226145522-00427.warc.gz"}
https://www.gamedev.net/forums/topic/639010-interpolation-of-four-points-grid-corners/
# Interpolation of four points (grid corners)

## Recommended Posts

Hi all,

Consider, if you will, this image: http://i.imgur.com/xsZ6duv.png

Assuming that the four corners there have a value, I'm looking for a way to determine the best values for the red dots (or, actually, any point within the square). I'm trying to determine light values at a finer grain than just finding the nearest corner and using it.

Is there a way to do this? I tried simple linear interpolation combining both directions, but the results aren't satisfactory, so I think the way I thought it up is fatally flawed.

Thanks for any help!

--JR

##### Share on other sites

I usually do something like interpolate between the TL and TR values according to the X position to get a Top value, then interpolate between the BL and BR values according to the X position to get a Bottom value, then interpolate between my calculated Top value and my calculated Bottom value according to the Y position to get a final result. Not sure if there are better options out there.

##### Share on other sites

Do you happen to have a scrap of code I can look at? Cuz I'm doing something similar to that, I think-- I'm getting a left-right value, and then modulating it against an up-down value. Like I said, the results are... much worse than I would have expected, so I'm clearly doing something wrong.

##### Share on other sites

Have you tried looking into bilinear interpolation?

##### Share on other sites

Here's some code from my Perlin noise code. It uses a cosine interpolation for extra smoothness in ground height, but a straight lerp should work OK too.

```
// Separate the base and the fraction parts.
int iBaseX = (int)floorf(vInput.GetX());
float fFractionX = vInput.GetX() - (float)iBaseX;
int iBaseY = (int)floorf(vInput.GetY());
float fFractionY = vInput.GetY() - (float)iBaseY;

// Cosine interpolation of the four surrounding corner values.
float fTL = GetDiscreteNoise2D(iBaseX, iBaseY, pPrimeSet);
float fTR = GetDiscreteNoise2D(iBaseX + 1, iBaseY, pPrimeSet);
float fBL = GetDiscreteNoise2D(iBaseX, iBaseY + 1, pPrimeSet);
float fBR = GetDiscreteNoise2D(iBaseX + 1, iBaseY + 1, pPrimeSet);

float fT = CosInterp(fTL, fTR, fFractionX);
float fB = CosInterp(fBL, fBR, fFractionX);
float fRet = CosInterp(fT, fB, fFractionY);
return fRet;
```
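For reference, the straight-lerp version of what the second post describes is just standard bilinear interpolation. The sketch below is self-contained and illustrative only (the function names and test values are made up, not taken from anyone's project); `fx` and `fy` are the fractional position of the sample point inside the cell, in the range [0, 1].

```cpp
#include <cstdio>

// Plain linear interpolation between a and b, with t in [0, 1].
static float Lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}

// Bilinear interpolation of the four cell corners:
// tl = top-left, tr = top-right, bl = bottom-left, br = bottom-right.
// fx, fy are the fractional offsets of the sample point inside the cell.
static float Bilerp(float tl, float tr, float bl, float br, float fx, float fy)
{
    const float top    = Lerp(tl, tr, fx); // blend along x on the top edge
    const float bottom = Lerp(bl, br, fx); // blend along x on the bottom edge
    return Lerp(top, bottom, fy);          // then blend the two results along y
}

int main()
{
    // Corner values 1, 2, 3, 4; sampling the centre of the cell gives 2.5.
    std::printf("%f\n", Bilerp(1.0f, 2.0f, 3.0f, 4.0f, 0.5f, 0.5f));
    return 0;
}
```

If results from something like this still look wrong for light values, the usual suspects are a swapped corner order or fractions measured from the wrong cell, rather than the interpolation formula itself.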
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24508659541606903, "perplexity": 3857.3058618574673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.62/warc/CC-MAIN-20180718213135-20180718233135-00073.warc.gz"}
http://mg.is.tuebingen.mpg.de/publications/herzog_structured_2016
Structured contact force optimization for kino-dynamic motion generation mg Optimal control approaches in combination with trajectory optimization have recently proven to be a promising control strategy for legged robots. Computationally efficient and robust algorithms were derived using simplified models of the contact interaction between robot and environment such as the linear inverted pendulum model (LIPM). However, as humanoid robots enter more complex environments, less restrictive models become increasingly important. As we leave the regime of linear models, we need to build dedicated solvers that can compute interaction forces together with consistent kinematic plans for the whole-body. In this paper, we address the problem of planning robot motion and interaction forces for legged robots given predefined contact surfaces. The motion generation process is decomposed into two alternating parts computing force and motion plans in coherence. We focus on the properties of the momentum computation leading to sparse optimal control formulations to be exploited by a dedicated solver. In our experiments, we demonstrate that our motion generation algorithm computes consistent contact forces and joint trajectories for our humanoid robot. We also demonstrate the favorable time complexity due to our formulation and composition of the momentum equations. Author(s): Herzog, Alexander and Schaal, Stefan and Righetti, Ludovic Book Title: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Pages: 2703--2710 Year: 2016 Publisher: IEEE Department(s): Autonomous Motion, Movement Generation and Control Bibtex Type: Conference Paper (inproceedings) DOI: 10.1109/IROS.2016.7759420 Address: Daejeon, South Korea URL: https://arxiv.org/abs/1605.08571 BibTex @inproceedings{herzog_structured_2016, title = {Structured contact force optimization for kino-dynamic motion generation}, author = {Herzog, Alexander and Schaal, Stefan and Righetti, Ludovic}, booktitle = {2016 {IEEE}/{RSJ} {International} {Conference} on {Intelligent} {Robots} and {Systems} ({IROS})}, pages = {2703--2710}, publisher = {IEEE}, address = {Daejeon, South Korea}, year = {2016}, url = {https://arxiv.org/abs/1605.08571} }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3248685300350189, "perplexity": 4436.2227672966865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896169.35/warc/CC-MAIN-20200708000016-20200708030016-00213.warc.gz"}
https://it.mathworks.com/help/dsp/ug/fir-halfband-filter-design.html
# FIR Halfband Filter Design This example shows how to design FIR halfband filters. Halfband filters are widely used in multirate signal processing applications when interpolating or decimating by a factor of two. Halfband filters are implemented efficiently in polyphase form because approximately half of the halfband filter coefficients are equal to zero. Halfband filters have two important characteristics: • The passband and stopband ripples must be the same. • The passband-edge and the stopband-edge frequencies are equidistant from the halfband frequency $\frac{\mathrm{Fs}}{4}$ (or $\frac{\pi }{2}$ rad/sample in normalized frequency). ### Obtaining the Halfband Coefficients The `firhalfband` function returns the coefficients of an FIR halfband equiripple filter. As a simple example, consider a halfband filter dealing with data sampled at 96 kHz and has a passband frequency of 22 kHz. ```Fs = 96e3; Fp = 22e3; N = 100; num = firhalfband(N,Fp/(Fs/2)); zerophase(num,1,linspace(0,Fs/2,512),Fs);``` By zooming into the response, you can verify that the passband and stopband peak-to-peak ripples are the same. Also there is symmetry about the $\frac{\mathrm{Fs}}{4}$ (24 kHz) point. The passband extends up to 22 kHz as specified and the stopband begins at 26 kHz. We can also verify that every other coefficient is equal to zero by looking at the impulse response. This makes the filter very efficient to implement for interpolation or decimation by a factor of 2. ```fvt = fvtool(num,Fs=Fs); fvt.Analysis = "impulse";``` ### `dsp.FIRHalfbandInterpolator` and `dsp.FIRHalfbandDecimator` The `firhalfband` function provides several other design options. However, using `dsp.FIRHalfbandInterpolator` and `dsp.FIRHalfbandDecimator` System objects is recommended when working with streaming data. These two System objects not only design the coefficients, but also provide efficient polyphase implementation. They support filtering double, single precision floating-point data as well as fixed-point data. They also support C and HDL code generation as well as optimized ARM® Cortex® M and ARM® Cortex® A code generation. ```halfbandInterpolator = dsp.FIRHalfbandInterpolator(SampleRate=Fs,... Specification="Filter order and transition width",... FilterOrder=N,TransitionWidth=4000); fvt = fvtool(halfbandInterpolator,Fs=2*Fs); %#ok<NASGU> ``` In order to perform the interpolation, use the `dsp.FIRHalfbandInterpolator` System object™. Because this is a multirate filter, it is important to define what is meant by the sample rate. For this and all other System objects, the sample rate refers to the sample rate of the input signal. However, FVTool defines the sample rate as the rate at which the filter is running. In the case of interpolation, you upsample and then filter (conceptually), therefore the sample rate of FVTool needs to be specified as $2\mathrm{Fs}$ because of the upsampling by 2. ```FrameSize = 256; scope = spectrumAnalyzer(SampleRate=2*Fs); sine1 = dsp.SineWave(Frequency=10e3,SampleRate=Fs,... SamplesPerFrame=FrameSize); sine2 = dsp.SineWave(Frequency=20e3,SampleRate=Fs,... SamplesPerFrame=FrameSize); tic while toc < 10 x = sine1() + sine2() + 0.01.*randn(FrameSize,1); % 96 kHz y = halfbandInterpolator(x); % 192 kHz scope(y); end release(scope);``` Notice that the spectral replicas are attenuated by about 40 dB which is roughly the attenuation provided by the halfband filter. You can obtain a plot with the interpolated samples overlaid on the input samples by compensating for the group-delay of the filter. 
Notice that the input samples remain unchanged at the output of the filter. This is because one of the polyphase branches of the halfband filter is a pure delay branch which does not change the input samples. ```grpDel = 50; n = 0:2:511; stem(n(1:end-grpDel/2),x(1:end-grpDel/2),"k","filled") hold on nu = 0:511; stem(nu(1:end-grpDel),y(grpDel+1:end)) legend("Input samples","Interpolated samples")``` In the case of decimation, the sample rate specified in `dsp.FIRHalfbandDecimator` corresponds to the sample rate of the filter, since the object filters and then downsamples (conceptually). So for decimators, the$\text{\hspace{0.17em}}\mathrm{Fs}$ specified in FVTool does not need to be multiplied by any factor. ```FrameSize = 256; FsIn = 2*Fs; halfbandDecimator = dsp.FIRHalfbandDecimator(SampleRate=FsIn,... Specification="Filter order and transition width",... FilterOrder=N,TransitionWidth=4000); fvt = fvtool(halfbandDecimator,Fs=FsIn);%#ok<NASGU> scope = spectrumAnalyzer(SampleRate=Fs); sine1 = dsp.SineWave(Frequency=10e3,SampleRate=Fs,... SamplesPerFrame=FrameSize); sine2 = dsp.SineWave(Frequency=20e3,SampleRate=Fs,... SamplesPerFrame=FrameSize); tic while toc < 10 x = sine1() + sine2() + 0.01.*randn(FrameSize,1); % 96 kHz y = halfbandInterpolator(x); % 192 kHz xd = halfbandDecimator(y); % 96 kHz scope(xd); end``` `release(scope);` ### Obtaining the Filter Coefficients The filter coefficients can be extracted from the interpolator/decimator by using the `tf` function. `num = tf(halfbandInterpolator); % Or num = tf(halfbandDecimator);` ### Using Different Design Specifications Instead of specifying the filter order and transition width, you can design a minimum-order filter that provides a given transition width as well as a given stopband attenuation. ```Ast = 80; % 80 dB halfbandInterpolator = dsp.FIRHalfbandInterpolator(SampleRate=Fs,... Specification="Transition width and stopband attenuation",... StopbandAttenuation=Ast,TransitionWidth=4000); fvtool(halfbandInterpolator,Fs=2*Fs);``` Notice that as with all interpolators, the passband gain in absolute units is equal to the interpolation factor (2 in the case of halfbands). This corresponds to a passband gain of 6.02 dB. It is also possible to specify the filter order and the stopband attenuation. ```halfbandDecimator = dsp.FIRHalfbandDecimator(SampleRate=Fs,... Specification="Filter order and stopband attenuation",... StopbandAttenuation=Ast,FilterOrder=N); fvtool(halfbandDecimator,Fs=Fs);``` Unlike interpolators, decimators have a gain of 1 (0 dB) in the passband. ### Using Halfband Filters for Filter Banks Halfband interpolators and decimators can be used to efficiently implement synthesis/analysis filter banks. The halfband filters shown so far have all been lowpass filters. With a single extra adder, it is possible to obtain a highpass response in addition to the lowpass response and use the two responses for the filter bank implementation. The following code simulates a quadrature mirror filter (QMF) bank. An 8 kHz signal consisting of 1 kHz and 3 kHz sine waves is separated into two 4 kHz signals using a lowpass/highpass halfband decimator. The lowpass signal retains the 1 kHz sine wave while the highpass signal retains the 3 kHz sine wave (which is aliased to 1 kHz after downsampling). The signals are then merged back together with a synthesis filter bank using a halfband interpolator. The highpass branch upconverts the aliased 1 kHz sine wave back to 3 kHz. The interpolated signal has an 8 kHz sample rate. 
```Fs1 = 8000; % Units = Hz Spec = "Filter order and transition width"; Order = 52; TW = 4.1e2; % Units = Hz % Construct FIR Halfband Interpolator halfbandInterpolator = dsp.FIRHalfbandInterpolator( ... Specification=Spec,... FilterOrder=Order,... TransitionWidth=TW,... SampleRate=Fs1/2,... FilterBankInputPort=true); % Construct FIR Halfband Decimator halfbandDecimator = dsp.FIRHalfbandDecimator( ... Specification=Spec,... FilterOrder=Order,... TransitionWidth=TW,... SampleRate=Fs1); % Input f1 = 1000; f2 = 3000; InputWave = dsp.SineWave(Frequency=[f1,f2],SampleRate=Fs1,... SamplesPerFrame=1024,Amplitude=[1 0.25]); % Construct Spectrum Analyzer object to view the input and output scope = spectrumAnalyzer(SampleRate=Fs1,... PlotAsTwoSidedSpectrum=false,ShowLegend=true,... YLimits=[-120 30],... Title="Input Signal and Output Signal of Quadrature Mirror Filter",... ChannelNames={"Input","Output"}); %#ok<CLARRSTR> tic while toc < 10 Input = sum(InputWave(),2); NoisyInput = Input+(10^-5)*randn(1024,1); [Lowpass,Highpass] = halfbandDecimator(NoisyInput); Output = halfbandInterpolator(Lowpass,Highpass); scope([NoisyInput,Output]); end release(scope);``` ### Advanced Design Options: Specifying Different Design Algorithms All designs presented so far have been optimal equiripple designs. `dsp.FIRHalfbandDecimator` and `dsp.FIRHalfbandInterpolator` System objects can also design their filters using the Kaiser window method. ```Fs = 44.1e3; N = 90; TW = 1000; equirippleHBFilter = dsp.FIRHalfbandInterpolator(DesignMethod="Equiripple",... Specification="Filter order and transition width",... SampleRate=Fs,... TransitionWidth=TW,... FilterOrder=N); kaiserHBFilter = dsp.FIRHalfbandInterpolator(DesignMethod="Kaiser",... Specification="Filter order and transition width",... SampleRate=Fs,... TransitionWidth=TW,... FilterOrder=N); ``` You can compare the designs with FVTool. The two designs allow for tradeoffs between minimum stopband attenuation and larger overall attenuation. ```fvt = fvtool(equirippleHBFilter,kaiserHBFilter,Fs=2*Fs); legend(fvt,"Equiripple design","Kaiser-window design")``` If you use the`fdesign.interpolator` and `fdesign.decimator` objects, other design algorithms, such as Least-square linear-filter FIR filter design are available. To determine the list of available design methods for a given filter specification object, use the `designmethods` function. ```filtSpecs = fdesign.interpolator(2,"halfband","N,TW",N,TW/Fs); designmethods(filtSpecs,"FIR");``` ```FIR Design Methods for class fdesign.interpolator (N,TW): equiripple firls kaiserwin ``` ### Controlling the Stopband Attenuation Alternatively, one can specify the order and the stopband attenuation. This allows for tradeoffs between overall stopband attenuation and transition width. ```Ast = 60; % Minimum stopband attenuation equirippleHBFilter = dsp.FIRHalfbandInterpolator(DesignMethod="Equiripple",... Specification="Filter order and stopband attenuation",... SampleRate=Fs,... StopbandAttenuation=Ast,... FilterOrder=N); kaiserHBFilter = dsp.FIRHalfbandInterpolator(DesignMethod="Kaiser",... Specification="Filter order and stopband attenuation",... SampleRate=Fs,... StopbandAttenuation=Ast,... FilterOrder=N); fvt = fvtool(equirippleHBFilter,kaiserHBFilter,Fs=2*Fs); legend(fvt,"Equiripple design","Kaiser-window design")``` ### Minimum-Order Designs Kaiser window designs can also be used in addition to equiripple designs when designing a filter of the minimum-order necessary to meet the design specifications. 
The actual order for the Kaiser window design is larger than that needed for the equiripple design, but the overall stopband attenuation is better in return. ```Fs = 44.1e3; TW = 1000; % Transition width Ast = 60; % 60 dB minimum attenuation in the stopband equirippleHBFilter = dsp.FIRHalfbandDecimator(DesignMethod="Equiripple",... Specification="Transition width and stopband attenuation",... SampleRate=Fs,... TransitionWidth=TW,... StopbandAttenuation=Ast); kaiserHBFilter = dsp.FIRHalfbandDecimator(DesignMethod="Kaiser",... Specification="Transition width and stopband attenuation",... SampleRate=Fs,... TransitionWidth=TW,... StopbandAttenuation=Ast); fvt = fvtool(equirippleHBFilter,kaiserHBFilter); legend(fvt,"Equiripple design","Kaiser-window design")``` ### Automatic Choice of Filter Design Technique In addition to `"Equiripple"` and `"Kaiser"`, the `DesignMethod` property of `dsp.FIRHalfbandDecimator` and `dsp.FIRHalfbandInterpolator S`ystem objects can also be specified as `"Auto"`. When `DesignMethod` is set to `"Auto"`, the filter design method is chosen automatically by the object based on the filter design parameters. ```Fs = 44.1e3; TW = 1000; % Transition width Ast = 60; % 60 dB minimum attenuation in the stopband autoHBFilter = dsp.FIRHalfbandDecimator(DesignMethod="Auto",... Specification="Transition width and stopband attenuation",... SampleRate=Fs,... TransitionWidth=TW,... StopbandAttenuation=Ast); fvt = fvtool(autoHBFilter); legend(fvt,"DesignMethod = Auto");``` For the above filter specifications, you can observe from the magnitude response that the System object designs an equiripple filter. If the design constraints are very tight such as a very high stopband attenuation or a very narrow transition width, then the algorithm automatically chooses the Kaiser window method. The Kaiser window method is optimal to design filters with very tight specifications. However, if the design constraints are not tight, then the algorithm performs equiripple design. The following illustrates a case where the filter specifications are too tight to perform equiripple design. The `DesignMethod` property of the object is set to `"Equiripple"`. Hence the object attempts to design the filter using equiripple characteristics and the design fails to converge, resulting in warnings generated about convergence. ```Fs = 192e3; TW = 100; % Transition width Ast = 180; % 180 dB minimum attenuation in the stopband equirippleHBFilter = dsp.FIRHalfbandDecimator(DesignMethod="Equiripple",... TransitionWidth=TW,... StopbandAttenuation=Ast,... SampleRate=Fs); fvt = fvtool(equirippleHBFilter);``` ```Warning: Final filter order of 10448 is probably too high to optimally meet the constraints. ``` ```Warning: Design is not converging. Number of iterations was 5 1) Check the resulting filter using freqz. 2) Check the specifications. 3) Filter order may be too large or too small. 4) For multiband filters, try making the transition regions more similar in width. If err is very small, filter order may be too high ``` `legend(fvt,"DesignMethod = Equiripple");` In this case, it is possible to design a filter that converges in design by setting the `DesignMethod` property to "`Auto"` or "`Kaiser"`, and the object designs the halfband filter using the Kaiser window method. ```Fs = 192e3; TW = 100; % Transition width Ast = 180; % 180 dB minimum attenuation in the stopband autoHBFilter = dsp.FIRHalfbandDecimator(DesignMethod="Auto",... TransitionWidth=TW,... StopbandAttenuation=Ast,... 
SampleRate=Fs); fvt = fvtool(autoHBFilter); legend(fvt,"DesignMethod = Auto");``` ### Equiripple Designs with Increasing Stopband Attenuation Using the `fdesign.interpolator` and `fdesign.decimator `objects, you can also modify the shape of the stopband in equiripple design by specifying the optional `"StopbandShape" `argument of the `design` function. ```Fs = 44.1e3; TW = 1000/(Fs/2); % Transition width Ast = 60; % 60 dB minimum attenuation in the stopband filtSpecs = fdesign.decimator(2,"halfband","TW,Ast",TW,Ast); equirippleHBFilter1 = design(filtSpecs,"equiripple",... StopbandShape="1/f",StopbandDecay=4,SystemObject=true); equirippleHBFilter2 = design(filtSpecs,"equiripple",... StopbandShape="linear",StopbandDecay=53.333,SystemObject=true); fvt = fvtool(equirippleHBFilter1,equirippleHBFilter2,... Fs=Fs); legend(fvt,"Stopband decaying as (1/f)^4","Stopband decaying linearly")``` ### Highpass Halfband Filters A highpass halfband filter can be obtained from a lowpass halfband filter by changing the sign of every second coefficient. Alternatively, one can directly design a highpass halfband by setting the `Type` property of the `fdesign.decimator` object to `"Highpass`". ```filtSpecs = fdesign.decimator(2,"halfband",... "TW,Ast",TW,Ast,Type="Highpass"); halfbandHPFilter = design(filtSpecs,"equiripple",... StopbandShape="linear",StopbandDecay=53.333,SystemObject=true); fvt = fvtool(halfbandHPFilter,equirippleHBFilter2,Fs=Fs); legend(fvt,"Highpass halfband filter","Lowpass halfband filter")```
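To make the polyphase bookkeeping concrete outside of MATLAB, here is a minimal C++ sketch of interpolation by 2 that exploits the halfband zero pattern. It is not derived from the System objects above; the 7-tap coefficient set {-1/32, 0, 9/32, 1/2, 9/32, 0, -1/32} is only a small textbook halfband example used for illustration. Because every tap except the centre one at odd spacing is zero, one polyphase branch reduces to a scaled delay and roughly half of the multiplications disappear.

```cpp
#include <cstdio>
#include <vector>

// Polyphase interpolation by 2: y[2n+p] = sum_k h[2k+p] * x[n-k], p = 0, 1.
// For a halfband design, one of the two polyphase components has a single
// non-zero entry (the centre tap), so that branch is just a scaled delay;
// the zero taps are skipped explicitly below.
std::vector<float> interp2(const std::vector<float>& x, const std::vector<float>& h)
{
    std::vector<float> y(2 * x.size(), 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n) {
        for (std::size_t p = 0; p < 2; ++p) {
            float acc = 0.0f;
            for (std::size_t k = 0; 2 * k + p < h.size(); ++k) {
                const float c = h[2 * k + p];
                if (c == 0.0f) continue;          // halfband: ~half the taps vanish
                if (n >= k) acc += c * x[n - k];
            }
            y[2 * n + p] = acc;
        }
    }
    return y;
}

int main()
{
    // Classic 7-tap halfband example (unity passband gain; scale by 2 for an
    // interpolator with passband gain 2, as dsp.FIRHalfbandInterpolator has).
    const std::vector<float> h = {-1.0f/32, 0.0f, 9.0f/32, 0.5f, 9.0f/32, 0.0f, -1.0f/32};
    const std::vector<float> x = {1.0f, 0.0f, 0.0f, 0.0f};   // unit impulse
    for (float v : interp2(x, h)) std::printf("% .5f ", v);  // prints the taps back
    std::printf("\n");
    return 0;
}
```

Feeding a unit impulse through the interpolator returns the filter taps themselves (followed by a trailing zero), which is a quick sanity check that the polyphase interleaving is wired up correctly.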
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6060122847557068, "perplexity": 1985.8675910857276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00023.warc.gz"}
http://home.fnal.gov/~mrenna/lutp0613man2/node74.html
Next: Cross-section Calculations Up: Process Generation Previous: Kinematics and Cross Section   Contents ## Resonance Production The simplest way to produce a resonance is by a process. If the decay of the resonance is not considered, the cross-section formula does not depend on , but takes the form (84) Here the physics is contained in the cross section . The scale is usually taken to be . In published formulae, cross sections are often given in the zero-width approximation, i.e. , where is the mass of the resonance. Introducing the scaled mass , this corresponds to a delta function , which can be used to eliminate the integral over . However, what we normally want to do is replace the function by the appropriate Breit-Wigner shape. For a resonance width this is achieved by the replacement (85) In this formula the resonance width is a constant. An improved description of resonance shapes is obtained if the width is made -dependent (occasionally also referred to as mass-dependent width, since is not always the resonance mass), see e.g. [Ber89]. To first approximation, this means that the expression is to be replaced by , both in the numerator and the denominator. An intermediate step is to perform this replacement only in the numerator. This is convenient when not only -channel resonance production is simulated but also non-resonance - or -channel graphs are involved, since mass-dependent widths in the denominator here may give an imperfect cancellation of divergences. (More about this below.) To be more precise, in the program the quantity is introduced, and the Breit-Wigner is written as (86) The factor is evaluated as a sum over all possible final-state channels, . Each decay channel may have its own dependence, as follows. A decay to a fermion pair, , gives no contribution below threshold, i.e. for . Above threshold, is proportional to , multiplied by a threshold factor for the vector part of a spin 1 resonance, by for the axial vector part, by for a scalar resonance and by for a pseudoscalar one. Here . For the decay into unequal masses, e.g. of the , corresponding but more complicated expressions are used. For decays into a quark pair, a first-order strong correction factor is included in . This is the correct choice for all spin 1 colourless resonances, but is here used for all resonances where no better knowledge is available. Currently the major exception is top decay, where the factor is used to approximate loop corrections [Jez89]. The second-order corrections are often known, but then are specific to each resonance, and are not included. An option exists for the resonances, where threshold effects due to bound-state formation are taken into account in a smeared-out, average sense, see eq. (). For other decay channels, not into fermion pairs, the dependence is typically more complicated. An example would be the decay , with a nontrivial threshold and a subtle energy dependence above that [Sey95a]. Since a Higgs with could still decay in this channel, it is in fact necessary to perform a two-dimensional integral over the Breit-Wigner mass distributions to obtain the correct result (and this has to be done numerically, at least in part). Fortunately, a Higgs particle lighter than is sufficiently narrow that the integral only needs to be performed once and for all at initialization (whereas most other partial widths are recalculated whenever needed). Channels that proceed via loops, such as , also display complicated threshold behaviours. 
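For reference, the fixed-width form of the replacement in eq. (85) is the standard relativistic Breit-Wigner substitution, written here in generic textbook notation (which may differ from the manual's own conventions in inessential details):

$$\delta\left(\hat{s} - m_R^2\right) \;\longrightarrow\; \frac{1}{\pi}\,\frac{m_R \Gamma_R}{\left(\hat{s} - m_R^2\right)^2 + m_R^2 \Gamma_R^2}\,,$$

where $m_R$ is the nominal resonance mass and $\Gamma_R$ the constant width. The $\hat{s}$-dependent variant described above then amounts, to first approximation, to replacing $m_R \Gamma_R$ by $\hat{s}\,\Gamma_R(\hat{s})/m_R$ in both the numerator and the denominator.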
The coupling structure within the electroweak sector is usually (re)expressed in terms of gauge boson masses, and , i.e. factors of are replaced according to (87) Having done that, is allowed to run [Kle89], and is evaluated at the scale. Thereby the relevant electroweak loop correction factors are recovered at the scale. However, the option exists to go the other way and eliminate in favour of . Currently is not allowed to run. For Higgs particles and technipions, fermion masses enter not only in the kinematics but also as couplings. The latter kind of quark masses (but not the former, at least not in the program) are running with the scale of the process, i.e. normally the resonance mass. The expression used is [Car96] (88) Here is the input mass at a reference scale , defined in the scheme. Typical choices are either or ; the latter would be relevant if the reference scale is chosen at the threshold. Both and are as given in . In summary, we see that an dependence may enter several different ways into the expressions from which the total is built up. When only decays to a specific final state are considered, the in the denominator remains the sum over all allowed decay channels, but the numerator only contains the term of the final state considered. If the combined production and decay process is considered, the same dependence is implicit in the coupling structure of as one would have had in , i.e. to first approximation there is a symmetry between couplings of a resonance to the initial and to the final state. The cross section is therefore, in the program, written in the form (89) As a simple example, the cross section for the process can be written as (90) where (91) If the effects of several initial and/or final states are studied, it is straightforward to introduce an appropriate summation in the numerator. The analogy between the and cannot be pushed too far, however. The two differ in several important aspects. Firstly, colour factors appear reversed: the decay contains a colour factor enhancement, while is instead suppressed by a factor . Secondly, the first-order correction factor for the final state has to be replaced by a more complicated factor for the initial state. This factor is not known usually, or it is known (to first non-trivial order) but too lengthy to be included in the program. Thirdly, incoming partons as a rule are space-like. All the threshold suppression factors of the final-state expressions are therefore irrelevant when production is considered. In sum, the analogy between and is mainly useful as a consistency cross-check, while the two usually are calculated separately. Exceptions include the rather messy loop structure involved in and , which is only coded once. It is of some interest to consider the observable resonance shape when the effects of parton distributions are included. In a hadron collider, to first approximation, parton distributions tend to have a behaviour roughly like for small -- this is why is replaced by in eq. (). Instead, the basic parton-distribution behaviour is shifted into the factor of in the integration phase space , cf. eq. (). When convoluted with the Breit-Wigner shape, two effects appear. One is that the overall resonance is tilted: the low-mass tail is enhanced and the high-mass one suppressed. The other is that an extremely long tail develops on the low-mass side of the resonance: when , eq. () with gives a , which exactly cancels the factor mentioned above. 
Naïvely, the integral over , , therefore gives a net logarithmic divergence of the resonance shape when . Clearly, it is then necessary to consider the shape of the parton distributions in more detail. At not-too-small , the evolution equations in fact lead to parton distributions more strongly peaked than , typically with , and therefore a divergence like in the cross-section expression. Eventually this divergence is regularized by a closing of the phase space, i.e. that vanishes faster than , and by a less drastic small- parton-distribution behaviour when . The secondary peak at small may give a rather high cross section, which can even rival that of the ordinary peak around the nominal mass. This is the case, for instance, with production. Such a peak has never been observed experimentally, but this is not surprising, since the background from other processes is overwhelming at low . Thus a lepton of one or a few GeV of transverse momentum is far more likely to come from the decay of a charm or bottom hadron than from an extremely off-shell of a mass of a few GeV. When resonance production is studied, it is therefore important to set limits on the mass of the resonance, so as to agree with the experimental definition, at least to first approximation. If not, cross-section information given by the program may be very confusing. Another problem is that often the matrix elements really are valid only in the resonance region. The reason is that one usually includes only the simplest -channel graph in the calculation. It is this signal' graph that has a peak at the position of the resonance, where it (usually) gives much larger cross sections than the other background' graphs. Away from the resonance position, signal' and background' may be of comparable order, or the background' may even dominate. There is a quantum mechanical interference when some of the signal' and background' graphs have the same initial and final state, and this interference may be destructive or constructive. When the interference is non-negligible, it is no longer meaningful to speak of a signal' cross section. As an example, consider the scattering of longitudinal 's, , where the signal' process is -channel exchange of a Higgs. This graph by itself is ill-behaved away from the resonance region. Destructive interference with background' graphs such as -channel exchange of a Higgs and - and -channel exchange of a is required to save unitarity at large energies. In colliders, the parton distribution is peaked at rather than at . The situation therefore is the opposite, if one considers e.g. production in a machine running at energies above : the resonance-peak tail towards lower masses is suppressed and the one towards higher masses enhanced, with a sharp secondary peak at around the nominal energy of the machine. Also in this case, an appropriate definition of cross sections therefore is necessary -- with additional complications due to the interference between and . When other processes are considered, problems of interference with background appears also here. Numerically the problems may be less pressing, however, since the secondary peak is occurring in a high-mass region, rather than in a more complicated low-mass one. Further, in there is little uncertainty from the shape of the parton distributions. In processes where a pair of resonances are produced, e.g. , cross section are almost always given in the zero-width approximation for the resonances. 
Here two substitutions of the type (92) are used to introduce mass distributions for the two resonance masses, i.e. and . In the formula, is the nominal mass and the actually selected one. The phase-space integral over , and in eq. () is then extended to involve also and . The effects of the mass-dependent width is only partly taken into account, by replacing the nominal masses and in the expression by the actually generated ones (also e.g. in the relation between and ), while the widths are evaluated at the nominal masses. This is the equivalent of a simple replacement of by in the numerator of eq. (), but not in the denominator. In addition, the full threshold dependence of the widths, i.e. the velocity-dependent factors, is not reproduced. There is no particular reason why the full mass-dependence could not be introduced, except for the extra work and time consumption needed for each process. In fact, the matrix elements for several and production processes do contain the full expressions. On the other hand, the matrix elements given in the literature are often valid only when the resonances are almost on the mass shell, since some graphs have been omitted. As an example, the process is dominated by when each of the two lepton pairs is close to in mass, but in general also receives contributions e.g. from , followed by and . The latter contributions are neglected in cross sections given in the zero-width approximation. Widths may induce gauge invariance problems, in particular when the -channel graph interferes with - or -channel ones. Then there may be an imperfect cancellation of contributions at high energies, leading to an incorrect cross section behaviour. The underlying reason is that a Breit-Wigner corresponds to a resummation of terms of different orders in coupling constants, and that therefore effectively the -channel contributions are calculated to higher orders than the - or -channel ones, including interference contributions. A specific example is , where -channel exchange interferes with -channel exchange. In such cases, a fixed width is used in the denominator. One could also introduce procedures whereby the width is made to vanish completely at high energies, and theoretically this is the cleanest, but the fixed-width approach appears good enough in practice. Another gauge invariance issue is when two particles of the same kind are produced in a pair, e.g. . Matrix elements are then often calculated for one common mass, even though in real life the masses . The proper gauge invariant procedure to handle this would be to study the full six-fermion state obtained after the two decays, but that may be overkill if indeed the 's are close to mass shell. Even when only equal-mass matrix elements are available, Breit-Wigners are therefore used to select two separate masses and . From these two masses, an average mass is constructed so that the velocity factor of eq. () is retained, (93) This choice certainly is not unique, but normally should provide a sensible behaviour, also around threshold. Of course, the differential cross section is no longer guaranteed to be gauge invariant when gauge bosons are involved, or positive definite. The program automatically flags the latter situation as unphysical. The approach may well break down when either or both particles are far away from mass shell. Furthermore, the preliminary choice of scattering angle is also retained. Instead of the correct and of eq. (), modified (94) can then be obtained. 
The , and are now used in the matrix elements to decide whether to retain the event or not. Processes with one final-state resonance and another ordinary final-state product, e.g. , are treated in the same spirit as the processes with two resonances, except that only one mass need be selected according to a Breit-Wigner. Next: Cross-section Calculations Up: Process Generation Previous: Kinematics and Cross Section   Contents Stephen Mrenna 2007-10-30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587163925170898, "perplexity": 723.7594679702174}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892892.86/warc/CC-MAIN-20180124010853-20180124030853-00382.warc.gz"}
https://www.mathdoubts.com/evaluate-limit-square-root-1-plus-x-1-divided-by-x-as-tends-to-0-by-rationalization/
# Evaluate $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{\sqrt{1+x}-1}{x}}$ by Rationalization The limit of square root of one plus $x$ minus one divided by $x$ is indeterminate as the value of $x$ approaches zero as per the direct substitution method. $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{\sqrt{1+x}-1}{x}}$ $\,=\,$ $\dfrac{0}{0}$ It is given in this limit question that the limit of square root of $1$ plus $x$ minus $1$ divided by $x$ should be evaluated by rationalization as the value of $x$ tends to $0$. ### Remove the indeterminate form by Rationalisation An expression in radical form the square root of $1$ plus $x$ is involved in forming the function in the numerator. So, let us try to remove the irrational form of the expression in the numerator by rationalizing it with its conjugate function. $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \bigg(\dfrac{\sqrt{1+x}-1}{x}}$ $\times$ $1\bigg)$ $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \bigg(\dfrac{\sqrt{1+x}-1}{x}}$ $\times$ $\dfrac{\sqrt{1+x}+1}{\sqrt{1+x}+1}\bigg)$ ### Find the Product by simplifying Rational function Now, it is time to multiply the two rational expressions consisting irrational form expressions by the multiplication rule of fractions. $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{\big(\sqrt{1+x}-1\big) \times \big(\sqrt{1+x}+1\big)}{x \times \big(\sqrt{1+x}+1\big)}}$ The product of the functions in the numerator can be multiplied by the difference of squares formula. $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{\big(\sqrt{1+x}\big)^2-1^2}{x\big(\sqrt{1+x}+1\big)}}$ Now, let us focus on simplifying the expressions in both numerator and denominator of the rational function. $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{1+x-1}{x\big(\sqrt{1+x}+1\big)}}$ $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{\cancel{1}+x-\cancel{1}}{x\big(\sqrt{1+x}+1\big)}}$ $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{x}{x\big(\sqrt{1+x}+1\big)}}$ $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{\cancel{x}}{\cancel{x}\big(\sqrt{1+x}+1\big)}}$ $=\,\,$ $\displaystyle \large \lim_{x\,\to\,0}{\normalsize \dfrac{1}{\sqrt{1+x}+1}}$ ### Evaluate the Limit by Direct substitution Now, let us find the limit of the reciprocal of square root of one plus $x$ plus one by substituting $x$ is equal to zero directly. $=\,\,$ $\dfrac{1}{\sqrt{1+0}+1}$ $=\,\,$ $\dfrac{1}{\sqrt{1}+1}$ $=\,\,$ $\dfrac{1}{1+1}$ $=\,\,$ $\dfrac{1}{2}$ A best free mathematics education website for students, teachers and researchers. ###### Maths Topics Learn each topic of the mathematics easily with understandable proofs and visual animation graphics. ###### Maths Problems Learn how to solve the math problems in different methods with understandable steps and worksheets on every concept for your practice. Learn solutions ###### Subscribe us You can get the latest updates from us by following to our official page of Math Doubts in one of your favourite social media sites. Copyright © 2012 - 2022 Math Doubts, All Rights Reserved
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.608066737651825, "perplexity": 516.7102328524759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00016.warc.gz"}
http://openstudy.com/updates/55fe1d9be4b0ed58e2757c00
## unimatix • one year ago: why are these two equations equal

This Question is Closed

1. unimatix • one year ago
$\frac{ u ^{2}}{ 1+u^2 } = 1 - \frac{ 1 }{ 1+u^2 }$
2. unimatix • one year ago
basic algebra i'm sure. just drawing a blank here.
3. dan815 • one year ago
you have to take the common denominator
4. dan815 • one year ago
just like how you need common denominator to add fractions now your denominator just happens to be some expression instead of a number
5. dan815 • one year ago
[whiteboard drawing]
6. dan815 • one year ago
[whiteboard drawing]
7. unimatix • one year ago
okay I see it now.
8. unimatix • one year ago
thanks!
9. dan815 • one year ago
welcome
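Spelled out in symbols, the common-denominator step dan815 describes is just

$1 - \dfrac{1}{1+u^2} = \dfrac{1+u^2}{1+u^2} - \dfrac{1}{1+u^2} = \dfrac{(1+u^2) - 1}{1+u^2} = \dfrac{u^2}{1+u^2}$

which is the identity in the original post, read from right to left.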
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999552965164185, "perplexity": 15624.455933717209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722459.85/warc/CC-MAIN-20161020183842-00492-ip-10-171-6-4.ec2.internal.warc.gz"}
http://www.computer.org/csdl/trans/td/2001/06/l0558-abs.html
Subscribe Issue No.06 - June (2001 vol.12) pp: 558-566 ABSTRACT <p><b>Abstract</b>—This paper studies a fundamental problem, the <it>termination detection</it> problem, in distributed systems. Under a wireless network environment, we show how to handle the <it>host mobility</it> and <it>disconnection</it> problems. In particular, when some distributed processes are temporarily disconnected, we show how to capture a <it>weakly terminated state</it> where silence has been reached only by those currently connected processes. A user may desire to know such a state to tell whether the mobile distributed system is still running or is silent because some processes are disconnected. Our protocol tries to exploit the network hierarchy by combining two existing protocols together. It employs the <it>weight-throwing scheme</it> [<ref rid="bibL05589" type="bib">9</ref>], [<ref rid="bibL055816" type="bib">16</ref>], [<ref rid="bibL055821" type="bib">21</ref>] on the wired network side, and the <it>diffusion-based scheme</it> [<ref rid="bibL05585" type="bib">5</ref>], [<ref rid="bibL055813" type="bib">13</ref>] on each wireless cell. Such a hybrid protocol can better pave the gaps of computation and communication capability between static and mobile hosts, thus more scalable to larger distributed systems. Analysis and simulation results are also presented.</p> INDEX TERMS Distributed computing, distributed protocol, mobile computing, operating system, termination detection, wireless network. CITATION Yu-Chee Tseng, Cheng-Chung Tan, "Termination Detection Protocols for Mobile Distributed Systems", IEEE Transactions on Parallel & Distributed Systems, vol.12, no. 6, pp. 558-566, June 2001, doi:10.1109/71.932710
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689716458320618, "perplexity": 7484.26723592939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065324.41/warc/CC-MAIN-20150827025425-00165-ip-10-171-96-226.ec2.internal.warc.gz"}
https://blog.csdn.net/jwmneu/article/details/7450793
# Palindrome POJ 1159 (Dynamic Programming)

Description

A palindrome is a symmetrical string, that is, a string read identically from left to right as well as from right to left. You are to write a program which, given a string, determines the minimal number of characters to be inserted into the string in order to obtain a palindrome. As an example, by inserting 2 characters, the string "Ab3bd" can be transformed into a palindrome ("dAb3bAd" or "Adb3bdA"). However, inserting fewer than 2 characters does not produce a palindrome.

Input

Your program is to read from standard input. The first line contains one integer: the length of the input string N, 3 <= N <= 5000. The second line contains one string with length N. The string is formed from uppercase letters from 'A' to 'Z', lowercase letters from 'a' to 'z' and digits from '0' to '9'. Uppercase and lowercase letters are to be considered distinct.

Output

Your program is to write to standard output. The first line contains one integer, which is the desired minimal number.

Sample Input

5
Ab3bd

Sample Output

2

```
#include "stdio.h"

const int N = 5010;
char s[N];
int c[N][N] = {0};   /* c[i][j]: minimal insertions to make s[i..j] a palindrome */
int n;

int min(int a, int b)
{
    if (a < b) return a;
    else return b;
}

/* Memoized recursion over the interval [i, j]. */
int Dp(int i, int j)
{
    if (i == j) return 0;
    if (i > j) return 0;
    if (c[i][j] != 0) return c[i][j];
    if (s[i] == s[j])
        c[i][j] = Dp(i + 1, j - 1);
    else
        c[i][j] = min(Dp(i + 1, j), Dp(i, j - 1)) + 1;
    return c[i][j];
}

int main()
{
    scanf("%d", &n);
    scanf("\n%s", s + 1);
    printf("%d\n", Dp(1, n));
    return 0;
}
```
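One caveat with the memoized version above: the full `int c[5010][5010]` table is roughly 100 MB, which can exceed the memory limit of this problem on some judges (a common workaround is to declare the table as `short`, or to avoid the full table entirely). An alternative formulation, sketched below in C++ for illustration rather than taken from the original post, uses the standard fact that the minimal number of insertions equals N minus the length of the longest common subsequence of the string and its reverse, which needs only two rolling rows of the DP table:

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    int n;
    if (std::scanf("%d", &n) != 1) return 0;
    std::vector<char> buf(n + 1);
    std::scanf("%s", buf.data());
    std::string s(buf.data(), n);
    std::string r(s.rbegin(), s.rend());   // the string reversed

    // LCS of s and reverse(s) with two rolling rows: O(n^2) time, O(n) memory.
    std::vector<int> prev(n + 1, 0), cur(n + 1, 0);
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= n; ++j) {
            if (s[i - 1] == r[j - 1])
                cur[j] = prev[j - 1] + 1;
            else
                cur[j] = std::max(prev[j], cur[j - 1]);
        }
        std::swap(prev, cur);
    }
    // Minimal insertions = n - LCS(s, reverse(s)).
    std::printf("%d\n", n - prev[n]);
    return 0;
}
```

For the sample input "Ab3bd" this prints 2, matching the expected output.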
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1814536452293396, "perplexity": 3645.532681054035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650730.61/warc/CC-MAIN-20180324151847-20180324171847-00748.warc.gz"}
https://encyclopedia.pub/10918
# Clinoptilolite Characterization and EDS Analysis

## Definition

Zeolites are materials of biomedical interest, in particular owing to their ability to remove metabolic products such as uremic toxins (i.e., urea, uric acid, creatinine, p-cresol, and indoxyl sulfate); they are used for the regeneration of dialysis solutions and as in vivo membranes for the artificial kidney. Zeolites have further important applications in the biomedical field: they are used as hemostats (due to their ability to absorb water), antiseptics (when modified with silver or zinc ions), carriers for drugs and genes (adjuvants in vaccines), glucose absorbers, etc. Here, EDS microanalysis in the study of a sample of natural clinoptilolite is reported.

## 1. Determination of the Si/Al molar ratio

A very important characteristic parameter for zeolites is the atomic[1] (or molar) ratio of the silicon and aluminium elements (Si/Al) contained in them. According to Löwenstein's rule[2], this ratio always assumes numerical values greater than 1 (the rule says that an AlO4 tetrahedron is unlikely to bind another AlO4 tetrahedron). When the Si/Al ratio is equal to 1, the tetrahedra of Si and those of Al regularly alternate to build an ordered structure. The Si/Al ratio varies from 1 to 7 for natural zeolites and from 1 to infinity for synthetic ones. In general, zeolites are classified on the basis of the numerical value of this Si/Al atomic ratio and distinguished into highly siliceous zeolites, when the ratio is greater than 5 (highly siliceous zeolites are nonpolar and therefore have little affinity with water), and aluminous zeolites, when this ratio is less than 5 (these minerals are polar and very compatible with water). The affinity of zeolites with water depends on the concentration of the hydrophilic sites (cationic and external hydroxyl sites) present in them, and these sites are, to a good approximation, numerically equal to the aluminum atoms (the concentration of the external hydroxyls is negligible). Many physico-chemical properties of zeolites depend on this ratio. For example, the electrical conductivity and ion exchange capacity of zeolites are closely related to the Si/Al ratio, and both properties improve as this ratio decreases.

The atomic Si/Al ratio is generally determined by elemental chemical analysis of zeolite samples, which is a destructive, generally time-consuming, and laborious procedure. EDS microanalysis is carried out by measuring the energy and intensity distribution of the X-rays generated by the action of the primary electron beam on the sample, using an energy-dispersive detector (a single crystal of silicon doped with lithium). It represents a rapid, non-destructive analytical method to evaluate the atomic Si/Al ratio of a zeolite sample. Thin slices of clinoptilolite were produced by cutting the raw stone with a diamond saw (electric mini-drill). The data generated by the EDS analysis consisted of spectra containing peaks that corresponded to the different elements present in the sample. This technique combines the morphological information offered by the SEM microscope with the qualitative and semi-quantitative compositional information offered by the X-rays acting on the section of the observed samples.
The samples were not metallized with an Au/Pd alloy, to avoid masking lighter elements, and were observed in low vacuum mode by a SEM microscope (FEI Quanta 200 FEG) equipped with an EDS energy-dispersive spectrometer (Oxford Inca Energy System 250 with an INCAx-act LN2-free detector). The EDS analysis was conducted on several samples of natural clinoptilolite and, for each of them, several points of different areas were analyzed. The investigated area was about 900 µm2 (see Figure 1).

Fig. 1 - Natural clinoptilolite sample analyzed by the EDS technique.

According to the EDS technique, the atomic percentage of silicon in the zeolite was on average equal to 22.90% and the average atomic percentage of aluminum was equal to 4.25%; the ratio of these atomic percentages provides a value for the atomic Si/Al ratio of 5.39, which matches the Si/Al value of natural clinoptilolite (a highly siliceous zeolite). Table 1 summarizes the results of the EDS analysis conducted on a single sample of natural clinoptilolite measured at three different points.

| | Point 1 (at.%) | Point 2 (at.%) | Point 3 (at.%) | Average |
| --- | --- | --- | --- | --- |
| Si | 23.38 | 22.75 | 22.58 | 22.90 |
| Al | 4.27 | 4.27 | 4.22 | 4.25 |
| Si/Al | 5.475 | 5.328 | 5.351 | 5.385 |

Tab. 1 - Atomic percentages of Si and Al and atomic Si/Al ratio for the sample of natural clinoptilolite.

## 2. Determination of the nature and concentrations of extra-framework cations

The crystallochemical variability of zeolites, and consequently their technological applications, depends not only on the atomic Si/Al ratio but also on the type of cations present in the structure. Usually, these cations are alkali or alkaline-earth metals[3], which are present in the channels of the mineral depending on their radius and charge (for example, clinoptilolite readily accepts Cs+ ions by an ion-exchange mechanism). The type of extra-framework cations and their molar or weight percentage in the mineral can also be obtained quickly and accurately by EDS analysis. As visible in the spectrum given in Figure 2, four different types of cations were present in the natural clinoptilolite sample, namely potassium, calcium, iron, and magnesium. The intensities of the signals of these ions were quite different; in particular, calcium and potassium were more abundant, while magnesium and iron were present at trace level. The average values of the percentages for these elements are reported in Tables 2 and 3. On the basis of these results, the investigated zeolite sample corresponded to K-type clinoptilolite (generally referred to as clinoptilolite-K). Iron is a typical impurity that is frequently found in zeolites of natural origin.

Fig. 2 - EDS spectrum of the natural clinoptilolite sample (top) and classification of the three forms of natural clinoptilolite (bottom).

| Cation | Area 1 | Area 2 | Area 3 | Average value |
| --- | --- | --- | --- | --- |
| K | 1.55 | 1.49 | 1.43 | 1.49 |
| Ca | 1.03 | 1.01 | 0.97 | 1.00 |
| Mg | 0.34 | 0.40 | 0.41 | 0.38 |
| Fe | 0.41 | 0.38 | 0.35 | 0.38 |

Tab. 2 - Atomic/molar percentages of extra-framework cations present in the sample of natural clinoptilolite.

| Cation | Area 1 | Area 2 | Area 3 | Average value |
| --- | --- | --- | --- | --- |
| K | 3.02 | 2.92 | 2.80 | 2.91 |
| Ca | 2.06 | 2.02 | 1.95 | 2.01 |
| Mg | 0.41 | 0.48 | 0.50 | 0.46 |
| Fe | 1.14 | 1.05 | 0.97 | 1.05 |

Tab. 3 - Percentages by weight of extra-framework cations present in the sample of natural clinoptilolite.
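As a quick illustration of how the averages and the Si/Al ratio in Table 1 follow from the point measurements, a minimal sketch (the numbers are simply the tabulated EDS atomic percentages above):

```python
si = [23.38, 22.75, 22.58]   # atomic % of Si at the three measured points (Table 1)
al = [4.27, 4.27, 4.22]      # atomic % of Al at the three measured points (Table 1)

si_avg = sum(si) / len(si)
al_avg = sum(al) / len(al)

print(f"average Si = {si_avg:.2f} at.%, average Al = {al_avg:.2f} at.%")
print(f"Si/Al = {si_avg / al_avg:.2f}")   # ~5.39, i.e. a highly siliceous zeolite
```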
## 3. Stoichiometric verification of the mineral chemical formula

Obviously, the EDS spectrum is completed by the presence of the oxygen fluorescence signal, which represents the most abundant element contained in the silicoaluminate compound (the average oxygen concentration calculated by EDS was ca. 69.48 at.%, 55.55% by weight). This signal was generated both by oxygen bonded to silicon and by oxygen bonded to aluminum. As can be easily calculated using the data in Table 1, owing to the presence of crystallization water in the mineral, the O/(Si+Al) atomic ratio was about 2.6. These experimental data can be compared with the theoretical values calculated from the chemical formula of the mineral. According to the chemical formula of a typical clinoptilolite, that is (Na2,K2,Ca)3Al6Si30O72·24H2O, the mineral contains 30 silicon atoms, 6 aluminum atoms and 96 oxygen atoms; therefore the O/(Si+Al) ratio corresponds to 96/(30+6) = 2.67, and this value is in very good agreement with the experimental data obtained by EDS, thus confirming that the mineral is clinoptilolite. As shown in Figure 3, a diagram of all EDS data also allows an immediate display of the compound composition.

Fig. 3 - Pie-diagram built with EDS data. This diagram allows the relative abundance of the elements in the compound to be displayed immediately.

## 4. Information on zeolites modified by ion exchange

After chemical modification of the zeolites (e.g., ion exchange, treatment with surfactants, etc.), EDS makes it possible to verify the effectiveness of the performed treatment. For example, when a new type of cation has been inserted into the zeolite crystal lattice by the ion exchange method, the EDS technique allows the obtained result to be evaluated quickly. In the case of K-clinoptilolite, the sodium cation (Na+) is not originally present in the mineral, but after the mineral had been in contact with a boiling aqueous solution of sodium chloride (NaCl) for approximately 20 min and then repeatedly washed with hot tap water, the EDS analysis showed the presence of this element (sodium) as well as a greater amount of magnesium (see Figure 4). In particular, the quantity of sodium introduced into the crystal lattice of natural clinoptilolite corresponded to 1.86 at.% (2.16% by weight) at the first point, while at the second point it corresponded to 2.17 at.% (2.52% by weight). As can be verified from the overall EDS data reported in Table 4, as a consequence of the ion exchange with a concentrated solution of sodium chloride in tap water, the content of sodium and magnesium ions increased, while the concentration of potassium and calcium decreased. The concentration of the elements belonging to the framework (i.e., silicon, aluminum and oxygen), which are not involved in the ion exchange process, and that of iron, which as a trivalent cation (Fe3+) is hardly exchanged by monovalent ions owing to the considerable strength of its electrostatic interaction with the negative charges of the framework, remained practically constant.

Fig. 4 - EDS spectra of the sample of natural clinoptilolite treated at 100°C with a concentrated aqueous solution of NaCl (and then hot washed repeatedly) and SEM micrographs of the areas where the EDS analysis was carried out.

| Element | Before treatment | After treatment |
| --- | --- | --- |
| Na | - | 1.86 |
| Mg | 0.30 | 1.81 |
| K | 1.74 | 0.75 |
| Ca | 1.15 | 0.39 |
| Fe | 0.49 | 0.37 |
| Si | 24.48 | 21.09 |
| Al | 4.37 | 4.35 |
| O | 67.47 | 69.33 |

Tab. 4 - Comparison between the atomic percentages of the elements before and after the ion exchange treatment.
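Before moving to the conclusions, a quick numeric check of the stoichiometric argument from section 3 (a sketch; the formula unit and the averaged EDS values are taken from the text above):

```python
# atoms per formula unit of (Na2,K2,Ca)3Al6Si30O72·24H2O
si_atoms, al_atoms = 30, 6
o_atoms = 72 + 24            # framework oxygen plus crystallization water

theoretical = o_atoms / (si_atoms + al_atoms)     # 96/36 = 2.67

# averaged EDS atomic percentages: O from the text, Si and Al from Table 1
o_meas, si_meas, al_meas = 69.48, 22.90, 4.25
measured = o_meas / (si_meas + al_meas)           # ~2.56, i.e. "about 2.6"

print(f"theoretical O/(Si+Al) = {theoretical:.2f}, EDS O/(Si+Al) = {measured:.2f}")
```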
## 5. Conclusions

Finally, according to the results given in this short technical report, the characterization of clinoptilolite and other zeolites by energy-dispersive X-ray microanalysis (EDS) combined with SEM represents an extremely powerful approach, which is also fast and easy to use.

## References

1. Mohau Moshoeshoe; A Review of the Chemistry, Structure, Properties and Applications of Zeolites. American Journal of Materials Science 2017, 7, 196-221, 10.5923/j.materials.20170705.12.
2. Christopher J. Heard; The effect of water on the validity of Löwenstein's rule. Chem. Sci. 2019, 10, 5705-5711, 10.1039/C9SC00725C.
3. D. A. Kennedy; Cation exchange modification of clinoptilolite - Screening analysis for potential equilibrium and kinetic adsorption separations involving methane, nitrogen, and carbon dioxide. Microporous and Mesoporous Materials 2018, 262, 235-250, 10.1016/j.micromeso.2017.11.054.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8148811459541321, "perplexity": 2523.453500997264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00321.warc.gz"}
https://www.arxiv-vanity.com/papers/1608.02881/
# A badly expanding set on the 2-torus

Rene Rühr

###### Abstract.

We give a counterexample to a conjecture stated in [LL06] regarding expansion on $\mathbb{T}^2$ under $\Sigma_U$ and $\Sigma_D$.

Let $\Sigma_U$ be the set containing the linear transformation and its transpose, and let $\Sigma_D$ be the larger set obtained by adding their inverses. Using these transformations, Linial and London [LL06] studied an infinite regular expander graph, showing the following expansion property: for any bounded measurable set $A$ of the plane, one has

$$m\Big(A\cup\bigcup_{\sigma\in\Sigma_U}\sigma(A)\Big)\geq 2\,m(A)\quad\text{and}\quad m\Big(A\cup\bigcup_{\sigma\in\Sigma_D}\sigma(A)\Big)\geq \tfrac{4}{3}\,m(A)\qquad (1)$$

where $m$ denotes the Lebesgue measure of a set, and the bounds are sharp. Note that these transformations preserve the lattice $\mathbb{Z}^2$ and thus also act on $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$, and this action is measure preserving with respect to the induced probability measure on $\mathbb{T}^2$. It was conjectured in [LL06] and in [HLW06, Conjecture 4.5] that there is a constant $c>0$ such that for $A\subseteq\mathbb{T}^2$ with $m(A)\leq c$ the estimate of line (1), with $\mathbb{T}^2$ in place of $\mathbb{R}^2$, holds. Below we give a simple counterexample to this conjecture.

Let $\pi:\mathbb{R}^2\rightarrow\mathbb{T}^2$ denote the natural projection map. Let $\varepsilon>0$ and define

$$C_U=\pi(\{(x,y)\in\mathbb{R}^2:|x|\leq\varepsilon \text{ or } |y|\leq\varepsilon\})$$

and

$$C_D=C_U\cup\pi(\{(x,y)\in\mathbb{R}^2:|x+y|\leq\varepsilon\}).$$

These sets are of arbitrarily small measure as $\varepsilon\rightarrow 0$ and violate the corresponding estimates on $\mathbb{T}^2$.

###### Proof.

The following picture depicts the set in red and its image in blue. We note that the overlapping triangles outside the square are to be seen modulo $\mathbb{Z}^2$; they thus wrap around and do not amount to additional mass.
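The figure and the explicit matrices did not survive this extraction. Purely as an illustration of the kind of measure comparison involved in (1), here is a rough Monte Carlo sketch; it assumes the standard integer shear [[1, 1], [0, 1]] and its transpose as the elements of $\Sigma_U$ (an assumption, since the matrices are not recoverable from this copy):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05

# Assumed generators (not recoverable from this copy of the paper):
# the standard integer shear and its transpose.
U = np.array([[1.0, 1.0], [0.0, 1.0]])
generators = [U, U.T]

def in_CU(pts):
    """Membership in C_U: the two coordinate strips of half-width eps on the torus."""
    q = (pts + 0.5) % 1.0 - 0.5               # representatives in [-1/2, 1/2)^2
    return (np.abs(q[:, 0]) <= eps) | (np.abs(q[:, 1]) <= eps)

pts = rng.random((200_000, 2))                # uniform sample of the torus
# a point p lies in sigma(C_U) iff sigma^{-1} p lies in C_U
members = [in_CU(pts)] + [in_CU(pts @ np.linalg.inv(M).T % 1.0) for M in generators]
union = np.logical_or.reduce(members)

print("m(C_U)                ~", in_CU(pts).mean())
print("m(C_U u images of C_U) ~", union.mean())   # compare the ratio with the bound 2 in (1)
```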
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975262880325317, "perplexity": 558.8323289963137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00554.warc.gz"}
http://algorist.com/algowiki/index.php?title=TADM2E_4.45&direction=next&oldid=359&printable=yes
Let's use the example given in the problem, for words say A, B, C, but not get too tied to the specific values. It helps to think about sorting the search string words by their indexes in the document:

A  C  B  A  A  C  *  *  B  B  *  *  *  *  C
1  2  3  4  5  6  7  8  9  10 11 12 13 14 15

In the code example this is just a call to sort, but since each word index list is already sorted we could use a merge (as in MergeSort), which means this portion of the code is $\mathcal{O}(n)$ for $n$ indexes. Once we have the list we push each element into a hash with a separate key for each word and update our estimate of the snippet span that includes all the words. Here's Python code for one simple implementation.

import doctest

def smallest_snippet(*args):
    """
    args -- K lists of word positions for K words

    >>> smallest_snippet([1, 10], [2, 20], [3, 30])
    [1, 3]
    >>> smallest_snippet([1, 9, 27], [6, 10, 19], [8, 12, 14])
    [8, 10]
    >>> smallest_snippet([1, 4, 11, 27], [3, 6, 10, 19], [5, 8, 12, 14])
    [3, 5]
    >>> smallest_snippet([1, 4, 5], [3, 9, 10], [2, 6, 15])
    [1, 3]
    """
    master = []
    for i, positions in enumerate(args):
        master.extend((i, j) for j in positions)   # (ith word, jth index)
    master.sort(key=lambda pair: pair[1])          # a k-way merge would make this O(n)

    tops = {}                                      # { word i: latest index j seen }
    best = [master[0][1], master[-1][1]]
    minspan = best[-1] - best[0] + 1
    # update the span estimate after each new (word, index) pair
    for i, j in master:
        tops[i] = j
        if len(tops) == len(args):
            curr = [min(tops.values()), max(tops.values())]
            if curr[1] - curr[0] < minspan:
                minspan = curr[1] - curr[0]
                best = curr
    return best

if __name__ == "__main__":
    doctest.testmod()

Solution 2: We can see from inspection that looking at pairwise combinations of indices tells us nothing. So we know 3 indices (one from each list) must be looked at in combination. We can also observe that the correct answer may involve indices at any position in the three lists (e.g., the solution may involve the final entry in each list). Let's call the three lists of positions/indices A, B, and C. To begin, consider an abstract triplet <Ai, Bj, Ck> where Ai < Bj < Ck. For example, <1, 3, 5>. Incrementing j to j+1 can only result in an equally good (e.g., <1, 4, 5>) or worse (<1, 6, 5>) solution. Similarly, incrementing k to k+1 can only result in a worse solution (e.g., <1, 3, 7>). Therefore, incrementing i to i+1 provides the only possibility of finding a better solution (e.g., <2, 3, 5>), but may also produce a worse solution (<8, 3, 5>). So, if we start with the triplet formed by the first element in each of the three lists, calculate the word span, and then loop by incrementing the index of the word position that occurs earliest in the text, we can proceed orderly through the lists of word positions, ignoring only combinations that are guaranteed not to produce a better solution. Note that this approach also allows us to terminate the search early in the case where the triplet element representing the word with the earliest occurrence in the text (i.e., that which we would next increment) is the final entry in one of the word position lists (as we have shown above that all remaining unexplored combinations must have a greater span). Applying this algorithm to the example given in the text results in combinations being tested in the following order:

<1, 3, 2>: 1-3*
<4, 3, 2>: 2-4
<4, 3, 6>: 3-6
<4, 9, 6>: 4-9
<5, 9, 6>: 5-9

The algorithm terminates here, as 5 is the lowest value and the word1 index list has no more elements. The answer is the span 1-3, which we find by examining only 5 possible combinations.
The algorithm executes in linear time, O(n) = O(|A| + |B| + |C|), and can be easily extended to an arbitrary number of words.
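Here is Solution 2 written out as code (a sketch in the spirit of the description above, generalised to any number of words; it is not part of the original wiki page):

```python
def smallest_snippet_pointers(*position_lists):
    """Advance the pointer of whichever word currently occurs earliest;
    stop when that word's position list is exhausted."""
    k = len(position_lists)
    idx = [0] * k                        # current pointer into each sorted list
    best = None
    while True:
        current = [position_lists[w][idx[w]] for w in range(k)]
        lo, hi = min(current), max(current)
        if best is None or hi - lo < best[1] - best[0]:
            best = (lo, hi)
        w_min = current.index(lo)        # word with the earliest occurrence
        idx[w_min] += 1
        if idx[w_min] == len(position_lists[w_min]):
            return list(best)            # no better span is possible

print(smallest_snippet_pointers([1, 4, 5], [3, 9, 10], [2, 6, 15]))   # [1, 3]
```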
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49277642369270325, "perplexity": 2127.60169957635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655889877.72/warc/CC-MAIN-20200705215728-20200706005728-00555.warc.gz"}
https://www.nature.com/articles/s41467-020-16667-x?error=cookies_not_supported
## Introduction Chemistry can be broadly defined as the change in valence electronic structure as atoms in a molecule geometrically rearrange. The adiabatic picture that describes this delicate interplay between electrons and nuclei is a central pillar to chemical dynamics and leads to the concept of a potential energy surface within the Born-Oppenheimer approximation1. Consequently, developing experimental probes that are sensitive to both nuclear and electronic evolution in real-time has become a primary goal in chemical physics2,3,4,5,6. Time-resolved photoelectron spectroscopy is one such method as it maps an adiabatically evolving wavepacket onto ionised final states with no strict selection rules7,8. A variant of the method, time-resolved photoelectron imaging, offers an additional dimension through the photoelectron angular distributions (PADs) that are sensitive to the molecular orbital from which the electron is removed9. Adiabatic changes can be tracked through molecular-frame PADs, but such measurements require a connection between the laboratory and molecular frames of reference, either through coincidence measurements2,5 or molecular alignment10. This complexity inhibits the application of such experiments to complex polyatomic molecules for which the methods are ultimately designed. Overcoming these limitations will provide a platform to probe chemical dynamics in complex molecules. The photoactive yellow protein, PYP, is a blue-light absorbing protein that has been an important testbed for novel structural probes in complex biological environments11,12. The absorption maximum for the S1 ← S0 transition in PYP is at ~450 nm and can be traced to a small chromophore that undergoes a light-activated transcis isomerisation which serves as a mechanical lever and triggers an extensive bio-cycle with numerous intermediates13,14,15. Derivatives of the PYP chromophore are commonly based on para-coumaric acid and have been studied extensively as a prototypical bio-chromophore16,17. Yet, there remains ambiguity about which specific bonds are involved in the initial excited state isomerisation and, hence, there is a desire to develop experimental probes that can distinguish subtly different reaction coordinates. For example, the anionic para-coumaric ketone chromophore (pCK, Fig. 1a), studied by Zewail and coworkers using time-resolved photoelectron spectroscopy, can isomerise about the first (single), the second (double), or both bonds in the para-position; but the photoelectron spectra alone could not discern these differences16. Chemical derivatives in which rotation about specific bonds is inhibited have also been studied, but such modifications diverge further from the native chromophore18. Several computational studies have explored the potential energy surfaces of the S0 and S1 states and considered the dynamics on the S1 state following photoexcitation19,20,21,22,23. These have converged on the position that the initial isomerisation coordinate involves predominantly rotation about the single bond, but have not been clearly linked with experimental data. In the present study, pCK is probed by time-resolved photoelectron imaging combined with electronic structure calculations. 
For specific photoelectron features, the temporal evolution of the spectra and laboratory-frame PADs, in unison with our calculations, enables the identification of the nuclear and electronic structural changes associated with the single-bond isomerisation coordinate on the excited state, thus demonstrating a direct probe for adiabatic dynamics, i.e. chemistry. ## Results ### Time-resolved photoelectron imaging Our experiment involves excitation of mass-selected pCK at 2.79 eV (444 nm) with femtosecond pulses to the bright S1 state. The excited state dynamics are subsequently probed at various delays using femtosecond pulses at 1.55 eV (800 nm) at the centre of a velocity map imaging spectrometer, yielding time-resolved photoelectron images. The temporal resolution (full width at half maximum) is 100 ± 10 fs and the spectral resolution ~5% of the electron kinetic energy, ε. The time-resolved photoelectron spectra are shown in Fig. 1b over the first picosecond following excitation. Each spectrum has had the spectrum at t < 0 removed to leave only the time-evolving signals. Figure 1e displays spectra at a few specific delays. At very early times, the photoelectron spectrum exhibits a peak centred at an electron kinetic energy, ε ~ 1.4 eV. With increasing pump-probe delay, t, this initial peak shifts towards lower ε, leaving a peak centred at ε ~ 0.8 eV at longer times (t >> 1 ps). This peak then slowly decays with a lifetime of ~120 ps with no further spectral evolution (see Supplementary Fig. 1). Based on our photoelectron spectra (Supplementary Note 2 and Supplementary Fig. 2), the electron affinity of pCK is 2.87 ± 0.05 eV. Hence, excitation to the S1 state at 2.79 eV is just below the detachment threshold and ensures we are predominantly probing bound-state dynamics, although the S1 absorption profile is broad and extends into the continuum24. Fig. 1d presents the integrated photoelectron signal over specific spectral windows that are indicated in Fig. 1b (ε1, ε2, and ε3) and are representative of the different spectral features. The high energy peak (represented by spectral region ε1 in Fig. 1b) shows a rapid initial decay with an apparent oscillation superimposed. The signal in the intermediate spectral range (ε2) rises as the high energy signal decays and similarly oscillates with a commensurate period, but with a π phase shift. These dynamics are also clearly visible in Fig. 1b. In addition to the evolution of the photoelectron spectra, we also observe an evolution of the PADs. Laboratory-frame PADs are typically quantified by anisotropy parameters, β2n(ε)25. For a two-photon (pump-probe) process, n = 1 and 2, and the PADs are defined through9,26 $$I\left( {\varepsilon ,\theta } \right) = \sigma /4\pi \left[ {1 + \beta _2\left( \varepsilon \right)P_2\left( {{\mathrm{cos}}\theta } \right) + \beta _4\left( \varepsilon \right)P_4\left( {{\mathrm{cos}}\theta } \right)} \right],$$ (1) where I(ε, θ) is the photoelectron signal as a function of the angle, θ, between the laser polarisation axis and the photoelectron emission velocity vector, σ is the detachment cross-section, and P2(cosθ) and P4(cosθ) are the second- and fourth-order Legendre polynomials. For large polyatomic molecules, only changes in β2 are often significant, which has limiting values +2 and −1 that correspond to electron emission predominantly parallel to and perpendicular to the polarisation axis, respectively. Figure 1c shows the measured β2(ε, t), with a 5-point moving average in ε. 
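As a quick numerical illustration of Eq. (1), here is a sketch that evaluates the lab-frame PAD for the two averaged β2 values quoted above; β4 is set to zero, consistent with the measured values being close to zero (the function name and printed ratio are only for illustration):

```python
import numpy as np

def pad(theta, beta2, beta4=0.0, sigma=1.0):
    """Evaluate Eq. (1): I(eps, theta) for given anisotropy parameters;
    sigma is the detachment cross-section."""
    x = np.cos(theta)
    p2 = 0.5 * (3 * x**2 - 1)                     # second-order Legendre polynomial
    p4 = (35 * x**4 - 30 * x**2 + 3) / 8          # fourth-order Legendre polynomial
    return sigma / (4 * np.pi) * (1 + beta2 * p2 + beta4 * p4)

# averaged beta2 values quoted in the text for the t = 0 and t = 1 ps features
for b2 in (-0.36, -0.11):
    ratio = pad(np.pi / 2, b2) / pad(0.0, b2)
    print(f"beta2 = {b2:+.2f}: I(90 deg)/I(0 deg) = {ratio:.2f}")
# beta2 < 0 gives a ratio > 1, i.e. emission peaked perpendicular to the polarisation axis
```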
Figure 1c is directly comparable to the spectral evolution shown in Fig. 1b. Note that when the overall photoelectron signal is low, the determination of β2(ε, t) has a large uncertainty and we omit data for signal that is less than 0.1 of the normalised signal in Fig. 1b for clarity. The corresponding β4(ε, t) data are given in the Supplementary Note 3 and Supplementary Fig. 3 and have values very close to zero suggesting that β2(ε, t) is a good measure of the overall PADs. Figure 1f shows the β2(ε) with no moving-average applied at two delays, t = 0 and t = 1 ps, with the corresponding spectra shown in Fig. 1e. To determine a specific anisotropy for a given feature, β2(ε) has been averaged over the spectral features as shown by the shaded regions in Fig. 1e. This yields values of β2 = −0.36 ± 0.09 and β2 = −0.11 ± 0.12 for the initial photoelectron peak at centred at ε ~1.4 eV (ε1) and the lower energy peak centred at ε ~0.8 eV (ε2), respectively. ### Assignment of photoelectron features There are two dominant pathways discussed for the initial S1 state dynamics of PYP chromophores19,20,22. The ground state of pCK is planar because of the π-conjugation over the para-substituent on the phenolate anion. Upon excitation to the S1 state, an electron populates a molecular orbital with π* character, weakening the corresponding π-conjugation of the molecule and facilitating rotation about the bonds. Following S1 photoexcitation, the molecule first rapidly relaxes to a local planar minimum with a geometry that is very similar to the Franck-Condon geometry (i.e. S0 minimum). From this planar S1 minimum, rotation about either the single bond, φSB, or the double bond, φDB, can occur as shown in Fig. 1a. Figure 2 shows the relevant potential energy surfaces that have been calculated using high-level multireference methods along two different pathways connecting the S1 planar minimum (PM) with the two minima on the S1 surface. These two minima arise from rotation around φSB or φDB and their geometries are denoted as SB and DB, respectively, as shown inset in Fig. 2a. The calculated pathways connect the different minima via a linear interpolation in internal coordinates (LIIC) and as such account for the geometrical changes along the different points of the S1 potential energy surface. While other nuclear displacements take place, the motion along either pathway is dominated by the rotations φSB or φDB. Motion along the φDB involves a barrier, while that along the φSB coordinate is essentially barrierless. Our calculations are in reasonable agreement with previous theoretical work, which suggested that φSB rotation is more probable and also found a barrier along φDB for related chromophores20,22. The main differences with previous theoretical works arise from the levels of theory used. Here, our main goal is to treat the S1 excited state on the same footing as the D0 final state to offer the most reliable energies to compare with the photoelectron spectra that are measured in the experiment. The vertical excitation energies of the S0, S1 and D0 states are obtained from the same XMCQDPT2 calculation (see Computational Details for more information). The photoelectron spectrum is determined by the difference in energy between the anionic (S1) and neutral (D0) states, as shown in Fig. 2a. Based on the calculated values, detachment of the Franck-Condon geometry (denoted FC in Fig. 2a) with a hv = 1.55 eV probe will lead to electron signal extending to εFC = 1.40 eV. 
The rapid initial relaxation to the S1 planar minimum (PM) reduces this limit slightly to εPM = 1.34 eV. Rotation about φSB and φDB leading to the S1 minima SB and DB is expected to lead to photoelectron signal extending to εSB = 0.87 eV and εDB = 0.21 eV. These limits are shown in Fig. 1e. It is important to note that the estimated uncertainty in the calculations is ~0.2 eV and that only the molecular geometry at each critical point was used to determine these energies. Additionally, the maxima of the photoelectron peaks are expected to be shifted to slightly lower energy compared to the predicted maximum kinetic energy because the potential energy at the minima is lower than at the initial excitation energy. For example, the predicted maximum signal for rotation about φSB will occur in the range 0.43 < εSB < 0.87 eV. Hence, the calculated maximal values should be used as a guide only. Nevertheless, based on the potential energy surfaces in Fig. 2a, the agreement of the peak at t = 0 with the expected energy for the Franck-Condon (and S1 planar minimum) geometry is excellent. At the later time of t = 1 ps, the broad peak centred at ε ~ 0.8 eV is consistent with a twisted intermediate that has undergone rotation about φSB. This peak is not consistent with rotation about φDB as the spectral maximum of DB is expected at ε < 0.21 eV. Hence, based solely on energetic arguments, the dynamics involving the peaks at ε1 ~ 1.4 eV and ε2 ~ 0.8 eV correspond to dynamics involving rotation about the single bond. Rotation about specific bonds also leads to differing electronic structures: adiabatically, a change in nuclear configuration is associated with an instantaneous adaptation of the underlying electronic structure. That is to say, the character of the valence orbitals at a given molecular geometry should be reflected in the laboratory-frame PADs and these may be expected to be different for the two different isomerisation pathways. Such changes can be quantitatively analysed by computing the Dyson orbital, ΨD, for the key structures along the reaction coordinate. The Dyson orbital can be thought of as the one-electron wavefunction describing the electron that is being photodetached. Krylov and coworkers have shown that PADs can be conveniently calculated from ΨD yielding computed β2(ε) trends27,28. We have previously shown that computed β2(ε) are in satisfactory agreement with experimental ones for several molecular anions in their ground state, including para-substituted phenolate anions, which pCK is a derivative of29,30. Moreover, we showed that PADs are also sensitive to subtly differing electronic structure when a short alkyl chain (ethyl) lies either in the plane of the phenolate ring or perpendicular to it30. We have now extended these calculations to predict the β2(ε) for detachment from the S1 excited state of pCK. Figure 2b shows ΨD for key critical geometries: the Franck-Condon geometry, ΨD(FC), and the two S1 minima associated with a rotation about φSB and φDB: ΨD(SB) and ΨD(DB). Laboratory-frame PADs were calculated based on these ΨD, with the neutral D0 ground state as the final state. The computed β2 values can be directly compared to the measured values (Fig. 1f). The simplest comparison can be done by averaging the computed β2 values over the same energy range as for the experimental results. This yielded computed anisotropy parameters for key geometries of β2 = −0.48 (FC), β2 = −0.40 (PM), β2 = −0.19 (SB) and β2 = +0.04 (DB). 
These can be directly compared with experimental values of β2 = −0.36 and β2 = −0.11 for the initial peak at t = 0 and the peak at t = 1 ps. Such a comparison suggests that the signal in ε1 arises from FC and PM, while the signal in ε2 arises from SB. A more useful comparison is based on the trends of β2(ε). From Fig. 3a, the measured β2(ε) for the peak at t = 0 is in reasonable quantitative agreement with the β2(ε) computed from ΨD(FC) and ΨD(PM). From Fig. 3b, β2(ε) for the photoelectron peak at t = 1 ps is in reasonable quantitative agreement with β2(ε) computed from ΨD(SB). In contrast, the agreement for this feature with β2(ε) computed from ΨD(DB) is poor and qualitatively has the wrong sign and trend. Other points along the φDB coordinate, including at the barrier, yielded predominantly positive β2(ε) values, similar to that predicted from ΨD(DB), and thus also qualitatively different to the observed experimental trends. We conclude that the signal in the ε2 spectral range is a direct measure of the single bond pathway rather than the double bond one upon photoexcitation to S1 of pCK, and that the dynamical changes between ε1 and ε2 reflect adiabatic motion from the FC/PM region to the SB minimum on the S1 excited state. This conclusion is consistent with the energetic arguments made earlier. Overall, the agreement between predicted and measured β2(ε) is almost quantitative, especially given that these are based on single geometries that do not account for other nuclear motions (either thermal or photoinduced), which will tend to make the PADs more isotropic. Moreover, the calculation of the PADs employs some key approximations. In particular, the outgoing wave is treated as a plane wave and thus assumes no interaction of the photoelectron with the neutral core. In the present case, this may be a poor approximation because the neutral pCK core has a large permanent dipole moment. Despite these limitations, the agreement is very good, especially in terms of the trends of β2 with ε as seen in Fig. 3. Inspection of ΨD in Fig. 2b provides intuitive chemical insight about how the PADs reflect the changes in electronic structure along the isomerisation coordinate. Specifically, we have previously used a simple Hückel model to interpret changes in the excited state energies and character for a series of para-substituted phenolate anions29. As pCK belongs to this family, similar arguments apply here. The S1 and S2 states can be considered as linear combinations of molecular orbitals localised on the phenolate ring and the π-conjugated para-substituent. From Fig. 2b, rotation about φSB leads to a localisation of ΨD(SB) onto the π-conjugated substituent. Locally, ΨD(SB) is therefore associated with a planar π-conjugated system and this is expected to lead to β2 < 0, similar to that predicted for ΨD(FC)26. In contrast, following the rotation about φDB, ΨD(DB) becomes delocalised over a non-planar moiety. Such a molecular orbital is expected to yield β2 ~ 0, as previously seen in the ground electronic state of para-ethylphenolate30. Hence, despite the complex nature of lab-frame PADs, simple arguments provide an intuitive view of the electronic structure changes associated with the isomerisation coordinate. Thus, without the need to perform high-level calculations, the observed PADs can provide qualitative insight into the changes in valence-bonding along the isomerisation coordinate.
## Discussion

Based on the spectral and angular distributions, the photoelectron signal at ε1 is assigned to the signature of FC and PM and that at ε2 to the twisted intermediate following rotation about the single bond, SB. The dynamics associated with this evolution are shown in Fig. 1d. The coherence observed shows a nuclear wavepacket moving on the excited state surface from the S1 planar minimum past the SB minimum and back again with a period of ~400 fs. Note that the vibrational modes that comprise this wavepacket are not necessarily the Franck-Condon active modes. The dominant FC modes are likely to stretch the C–C bonds as the excitation involves a π* ← π transition. These are high frequency modes that lead to very rapid dynamics from the FC towards the S1 minimum. This motion then evolves into the modes that lead to isomerisation. The observed oscillation is in agreement with excited state molecular dynamics simulations of pCK that have predicted a similar oscillation20. Only a single oscillation is observed, presumably as a result of the dephasing to other modes (i.e. internal vibrational energy redistribution). The time-resolved photoelectron spectroscopy experiment by Zewail and coworkers similarly noted energetic shifts and associated dynamics, following photoexcitation at hv = 3.10 eV16. Excitation at 3.10 eV is above the adiabatic detachment energy and probably also above the barrier to double bond rotation. In their experiments, autodetachment from the S1 state (characterised by electrons at low ε) was a prominent feature, which could swamp any signatures of the dynamics associated with double bond rotation that might have been occurring. We also observe a very small fraction of autodetachment (4%), enabled by the finite temperature (~300 K) and the spectral width of the pump pulse. Additionally, Zewail and coworkers observed an oscillation in the high kinetic energy window, similar to that observed in ε1, but not the out-of-phase oscillation at lower energy (ε2), probably because of contamination by autodetachment16. Dynamics involving isomerisation were also observed in a recent study on a closely related PYP chromophore anion in which the ketone is replaced by an ester group31. These dynamics were in competition with internal conversion to a non-valence state of the anion. Such dynamics are not observed here, highlighting that even small chemical changes can have a marked impact on the excited state dynamics. Finally, Fig. 1b and e show a peak at very low ε (ε3) in the time-resolved photoelectron spectra. This spectral peak could arise from double bond rotation. The maximum expected energy for photodetachment from DB is εDB = 0.21 eV, which would be consistent with this peak. It is not informative to analyse the PADs for this channel because they appear at too low a kinetic energy, where the PADs are generally expected to be isotropic. However, a number of observations may suggest a different origin of the ε3 signal. Firstly, the formation of the DB minimum involves motion along the φDB coordinate (Fig. 2a) and should lead to photoelectron signals that evolve continuously from FC/PM to the DB minimum; but this is not observed. Secondly, the oscillation frequency of the integrated signal in ε2 and ε3 is essentially identical (Fig. 1d); one might expect the period of motion to differ slightly between the two coordinates.
Thirdly, if this signal was attributed to DB rotation, then the minimum of the photoelectron signal in ε3 would arise because the probe photon energy was insufficient to access the final neutral state (D0)32. In that case, the oscillation should be observable in the total photoelectron signal, but no such changes are seen (Supplementary Fig. 4). Instead, we suspect that the signal in ε3 comes about because the probe can access the first excited state of the neutral, D1. This excited state can be seen in the photoelectron spectrum at higher photon energies (see Supplementary Fig. 2). According to our calculations, the vertical energy difference between the SB intermediate on S1 and the D1 is 1.3 eV, suggesting that it could be accessed with the 1.55 eV probe. Nevertheless, the assignment of this feature remains somewhat uncertain and we cannot exclude that concomitant dynamics about φDB are taking place on the S1 state over the first picosecond. It would be useful to probe the dynamics with a higher energy photon. However, this comes with added complications of possible excitations from the S1 to higher-lying excited states of the anion. In summary, we have probed the geometric and electronic structure of a polyatomic molecule using time-resolved photoelectron imaging. In combination with calculations beyond the Franck-Condon region, we can identify specific signals that arise from an isomerisation coordinate involving rotation about the single bond in pCK. The photoelectron signal provides information about changes in the energies of potential energy surfaces along an intramolecular coordinate, while the photoelectron angular distributions capture the changes in electronic structure that arise from such an isomerisation. While we can conclusively identify single-bond rotation, we cannot exclude that double-bond rotation may be occurring also, because its photoelectron signatures are not captured well in the current experiments. To the best of our knowledge, this presents the first study in which lab-frame photoelectron angular distributions have been tracked along a non-dissociative adiabatic coordinate and that have been quantitatively modelled. These methods provide a basis for probing adiabatic dynamics in large molecular systems. ## Methods ### Experimental details Experiments were performed on an anion photoelectron imaging spectrometer33. Anions were produced by negative-mode electrospray ionization of pCK in methanol at pH ~ 10 and transferred into vacuum where they were stored in a ring-electrode trap, thermalized to ~300 K, and unloaded into a time-of-flight mass spectrometer at 100 Hz. Mass-selected anions were intersected by a pair of delayed femtosecond pulses at the centre of a velocity-map imaging spectrometer, which monitored the velocity vectors of the emitted photoelectrons. Probe pulses used the fundamental of a Ti:Sapph (450 μJ pulse−1) and pump pulses were generated by 4th harmonic generation of idler of an OPA (5 μJ pulse−1) and interacted with the sample unfocussed (beam diameter ~ 3 mm). Pump and probe polarizations were set parallel to the detector. The temporal instrument response is 100 fs and times are accurate to better than ±10 fs. Raw photoelectron images were analysed using polar onion peeling34, which recovers the 3D electron velocity distribution from the 2D projection measured on the position sensitive detector (see Supplementary Methods and Supplementary Fig. 5). 
This analysis yields photoelectron spectra and PADs that were calibrated using the photoelectron spectrum of iodide. ### Computational details The energetic minima corresponding to FC(S0), the planar S1 state and SB(S1) and DB(S1) were first located at the SA2-CASSCF(12,11)/6-31G* level of theory (see Supplementary Fig. 6 and Supplementary Table 1)35. Linear interpolation in internal coordinates (LIIC) pathways were obtained to link the different critical points. An LIIC pathway gives the most straightforward path from a given molecular geometry to a different geometry by interpolating a series of geometries in between, using internal (not Cartesian) coordinates (see for example ref. 36). It is important to note that no reoptimisation of the molecular geometries is performed along these pathways, implying that LIIC pathways do not correspond to minimum energy paths, per se. In particular, the barriers observed along LIIC pathways are possibly higher than the actual barriers one would obtain by searching for proper transition states. LIICs, however, offer a clear picture of the possible pathways between critical points of potential energy surfaces and allow to predict photophysical and photochemical processes that a molecule can undergo. The electronic energy of the S1, S2 and D0 states were recalculated at all points along the LIICs using multi-state extended multi-configurational quasi-degenerate perturbation theory (MS-XMCQDPT2)37 to correct for the lack of dynamic correlation at the SA-CASSCF level. The (aug)-cc-pVTZ basis set was used where the augmented function was only affixed to the oxygen atoms38. The D0 was calculated through addition of an orbital characterized by an extremely diffuse p-function (α = 1E–10) in the active space and included in the 6 state averaging procedure to mimic detachment to the continuum39,40,41. A rigid shift was applied to match the S0–D0 energy to the experimentally determined vertical detachment energy of 2.94 ± 0.05 eV at the Franck-Condon geometry. A DFT/PBE0-based one-electron Fock-type matrix was used to obtain energies of MCSCF semi-canonical orbitals used in perturbation theory as done elsewhere39,40,41. The Dyson orbitals for critical geometries were calculated using EOM-EE/IP-CCSD/6-31+G** 27,28,42,43 and the PADs were modelled using ezDyson v444. EOM-EE-CCSD calculations with the 6-31+G** basis set were also used to determine the vertical excitation energies of the first excited state of the neutral, D1, at the minimum energy geometries on the S1 surface. The initial SA-CASSCF calculations were performed with Molpro 201245, XMCQDPT2 calculations were carried out using the Firefly quantum chemistry package46 and EOM-EE/IP-CCSD calculations used QChem 547.
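For readers unfamiliar with LIIC, the interpolation step can be sketched as below. This is not the authors' code, and the coordinate values are hypothetical placeholders; a real implementation would also handle dihedral periodicity and the full internal-coordinate set:

```python
import numpy as np

def liic_path(q_start, q_end, n_points=10):
    """Linear interpolation in internal coordinates: a series of geometries
    between two coordinate sets, with no re-optimisation along the way."""
    q_start = np.asarray(q_start, dtype=float)
    q_end = np.asarray(q_end, dtype=float)
    return [(1 - lam) * q_start + lam * q_end
            for lam in np.linspace(0.0, 1.0, n_points)]

# hypothetical internal coordinates: (C-C bond length / A, ring angle / deg, phi_SB / deg)
planar_minimum = [1.45, 120.0, 0.0]
sb_minimum = [1.47, 118.0, 90.0]
for q in liic_path(planar_minimum, sb_minimum, n_points=5):
    print(np.round(q, 2))
```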
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8507168889045715, "perplexity": 1769.0408077898712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948632.20/warc/CC-MAIN-20230327123514-20230327153514-00344.warc.gz"}
https://socratic.org/questions/how-do-you-solve-e-x-e-x-1
# How do you solve e^(x) + e^(-x) = 1?

Jun 13, 2016

There are no Real solutions, but: $x = \left(\pm \frac{\pi}{3} + 2 k \pi\right) i$ for any integer $k$

#### Explanation:

Method 1 - trigonometric

Let $x = i t$ and divide both sides of the equation by $2$ to get:

$\frac{1}{2} = \frac{{e}^{i t} + {e}^{- i t}}{2} = \cos \left(t\right)$

So $t = \pm {\cos}^{- 1} \left(\frac{1}{2}\right) + 2 k \pi = \pm \frac{\pi}{3} + 2 k \pi$

So $x = i t = \left(\pm \frac{\pi}{3} + 2 k \pi\right) i$

Method 2 - logarithm

Let $t = {e}^{x}$

Then this equation becomes:

$t + \frac{1}{t} = 1$

Multiply through by $t$ and rearrange a little to get:

$0 = {t}^{2} - t + 1$

$= {\left(t - \frac{1}{2}\right)}^{2} + \frac{3}{4}$

$= {\left(t - \frac{1}{2}\right)}^{2} - {\left(\frac{\sqrt{3}}{2} i\right)}^{2}$

$= \left(t - \frac{1}{2} - \frac{\sqrt{3}}{2} i\right) \left(t - \frac{1}{2} + \frac{\sqrt{3}}{2} i\right)$

So:

${e}^{x} = t = \frac{1}{2} \pm \frac{\sqrt{3}}{2} i$

Hence:

$x = \ln \left(\frac{1}{2} \pm \frac{\sqrt{3}}{2} i\right) + 2 k \pi i$ for any integer $k$

Note that we can add any integer multiple of $2 \pi i$ since ${e}^{2 \pi i} = {e}^{- 2 \pi i} = 1$

Now:

$\ln \left(\frac{1}{2} \pm \frac{\sqrt{3}}{2} i\right) = \ln \left| \frac{1}{2} \pm \frac{\sqrt{3}}{2} i \right| + \mathrm{Arg} \left(\frac{1}{2} \pm \frac{\sqrt{3}}{2} i\right) i$

$= \ln \left(\sqrt{{\left(\frac{1}{2}\right)}^{2} + {\left(\frac{\sqrt{3}}{2}\right)}^{2}}\right) \pm {\tan}^{- 1} \left(\sqrt{3}\right) i$

$= \ln \left(1\right) \pm \frac{\pi}{3} i$

$= \pm \frac{\pi}{3} i$
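A quick numerical check of the stated solutions (a small sketch using Python's cmath module):

```python
import cmath

# x = (±pi/3 + 2k*pi) i should satisfy e^x + e^(-x) = 1
for k in (-1, 0, 1):
    for sign in (+1, -1):
        x = 1j * (sign * cmath.pi / 3 + 2 * k * cmath.pi)
        value = cmath.exp(x) + cmath.exp(-x)
        print(f"k={k:+d}, sign={sign:+d}: {value.real:.6f}{value.imag:+.6f}j")  # ~1.000000+0.000000j
```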
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 25, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4059359133243561, "perplexity": 1253.415998283472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987834649.58/warc/CC-MAIN-20191023150047-20191023173547-00505.warc.gz"}
http://mechanicaldesign.asmedigitalcollection.asme.org/article.aspx?articleid=1449083
RESEARCH PAPERS

# Risk-Based Decision-Making for Managing Resources During the Design of Complex Space Exploration Systems

Author and Article Information

Ali Farhang Mehr, QSS Group, NASA Ames Research Center, M/S 269, Moffett Field, CA, [email protected]

Irem Y. Tumer, NASA Ames Research Center, M/S 269, Moffett Field, CA, [email protected]

Note that the RUBIC design methodology is more general in principle and can be readily extended and generalized to other forms of modeling the design process (other than functional modeling).

In this paper, $E(\cdot)$ and $\mathrm{Var}(\cdot)$ refer to the expected value and variance of a random process, respectively.

In this paper, $\sigma_{ii}$ denotes $\mathrm{Var}(b_i)$; $\sigma_{ij}$ refers to $\mathrm{Cov}(b_i,b_j)$; and $\sigma_i$ (with one index) refers to the standard deviation of $b_i$, i.e., $\sigma_{ii}=\sigma_i^2$.

In our future research, we will consider using nonlinear S-shaped benefit functions that taper off after a certain amount of investment (i.e., decreasing marginal risk reduction as the amount of investment increases). For example, one might consider using a logistic S-curve function for the amount of risk reduction versus investment. This would eliminate the need for imposing non-negativity constraints and would also better represent the reality that the actual added value of investing another dollar tapers off at a certain point. However, a nonlinear assumption would significantly complicate the portfolio optimization problem of the next section and is, therefore, left as part of the future research.

Note that we chose $\sigma(\mathrm{TB})$ instead of $\mathrm{Var}(\mathrm{TB})$ because $\sigma(\mathrm{TB})$ and $E(\mathrm{TB})$ have the same units and can be used in a linear combination.

FMEA, for instance, assigns a value to the failure rate based on reasonable estimations of the probability of occurrence obtained from experienced designers.

J. Mech. Des 128(4), 1014-1022 (Jan 29, 2006) (9 pages) doi:10.1115/1.2205868 History: Received November 18, 2005; Revised January 29, 2006

## Abstract

Complex space exploration systems are often designed in collaborative engineering environments where requirements and design decisions by various subsystem engineers have a great impact on the overall risk of the mission. As a result, the system-level management should allocate risk mitigation resources (e.g., capital to place additional sensors or to improve the current technology) among various risk elements such that the main objectives of the system are achieved as closely as possible. Minimizing risk has long been accepted as one of the major drivers for system-level decisions and particularly resource management. In this context, Risk-Based Decision Making refers to a process that allocates resources in such a way that the expected risk of the overall system is minimized. This paper presents a new risk-based design decision-making method, referred to as Risk and Uncertainty Based Concurrent Integrated Design Methodology, or RUBIC Design Methodology for short. The new approach is based on concepts from portfolio optimization theory and continuous resource management, extended to provide a mathematical rigor for risk-based decision-making during the design of complex space exploration systems. The RUBIC design method is based on the idea that a unit of resource, allocated to mitigate a certain risk in the system, contributes to the overall system risk reduction in the following two ways: (1) by mitigating that particular risk; and (2) by impacting other risk elements in the system (i.e., the correlation among various risk elements).
RUBIC then provides a probabilistic framework for reducing the expected risk of the final system via optimal allocation of available risk-mitigation resources. The application of the proposed approach is demonstrated using a satellite reaction wheel example.

## Figures

Figure 1: A high-level functional model of a satellite reaction wheel at some point in its conceptual design phase. A satellite reaction wheel is used to position spacecraft in the desired direction. Four major subsystems can be identified in this design (distinguished using different shades in the figure).

Figure 2: Triangular distribution for the random variable $b_i$. The benefit of investing a unit of resources is usually measured in dollars (x-axis). The y-axis shows the probability distribution function.

Figure 3: The thick curve represents the efficient frontier.

Figure 4: A snapshot of the development window in the web-based RUBIC design tool (developed at NASA Ames Research Center).

Figure 5: The risk efficient frontier (total resources spent to mitigate risk (investment) = 100 units (or \$1M); worst-case scenario consequence (cost of loss of mission) = \$20M; maximum expected risk reduction (vertical asymptote) = \$2.6M).
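To make the portfolio analogy in the abstract concrete, here is a minimal sketch of allocating a fixed budget across correlated risk-mitigation activities by trading off the expected total benefit E(TB) against its standard deviation σ(TB), as the footnotes above suggest. This is not the paper's RUBIC algorithm; the means, covariance matrix, and weight `lam` are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected benefit per unit of resource for three risk elements,
# and the covariance among those benefits (all values invented for illustration).
mu = np.array([3.0, 2.0, 2.5])
cov = np.array([[1.0,  0.3, -0.2],
                [0.3,  0.8,  0.1],
                [-0.2, 0.1,  1.2]])
budget, lam = 100.0, 0.5              # total resources and a risk-aversion weight

def neg_objective(x):
    # maximise E(TB) - lam * sigma(TB), where TB = sum_i x_i * b_i
    return -(mu @ x - lam * np.sqrt(x @ cov @ x))

result = minimize(neg_objective,
                  x0=np.full(len(mu), budget / len(mu)),
                  bounds=[(0.0, None)] * len(mu),
                  constraints=[{"type": "eq", "fun": lambda x: x.sum() - budget}])
print(np.round(result.x, 1))          # resource allocation across the three risk elements
```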
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37155815958976746, "perplexity": 1410.6442126116879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00519.warc.gz"}
http://www.neverendingbooks.org/tag/consani
# Tag: Consani

‘Gabriel’s topos’ (see here) is the conjectural, but still elusive topos from which the validity of the Riemann hypothesis would follow. It is the latest attempt in Alain Connes’ 20-year-long quest to tackle the RH (before, he tried the tools of noncommutative geometry and later those offered by the field with one element). For the last five years he has hoped that topos theory might provide the missing ingredient. Together with Katia Consani he introduced and studied the geometry of the Arithmetic site, and later the geometry of the scaling site. If you look at the points of these toposes you get horribly complicated ‘non-commutative’ spaces, such as the finite adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}^f_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (in case of the arithmetic site) and the full adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (for the scaling site). In Vienna, Connes gave a nice introduction to the arithmetic site in two lectures. The first part of the talk below also gives an historic overview of his work on the RH. The second lecture can be watched here. However, not everyone is as optimistic about the topos-approach as he seems to be. Here’s an insightful answer on MathOverflow by Will Sawin to the question “What is precisely still missing in Connes’ approach to RH?”. Other interesting MathOverflow threads related to the RH-approach via the field with one element are Approaches to Riemann hypothesis using methods outside number theory and Riemann hypothesis via absolute geometry.

About a month ago, from May 10th till 14th, Alain Connes gave a series of lectures at Ohio State University with the title “The Riemann-Roch strategy, quantizing the Scaling Site”. The accompanying paper has now been arXived: The Riemann-Roch strategy, Complex lift of the Scaling Site (joint with K. Consani). Especially interesting is section 2, “The geometry behind the zeros of $\zeta$”, in which they explain how looking at the zero locus inevitably leads to the space of adele classes and why one has to study this space with the tools from noncommutative geometry. Perhaps further developments will be disclosed in a few weeks' time when Connes is one of the speakers at Toposes in Como.

A couple of weeks ago, Alain Connes and Katia Consani arXived their paper “On the notion of geometry over $\mathbb{F}_1$”. Their subtle definition is phrased entirely in Grothendieck's scheme-theoretic language of representable functors and may be somewhat hard to get through if you only had a few years of mathematics. I’ll try to give the essence of their definition of an affine scheme over $\mathbb{F}_1$ (and illustrate it with an example) in a couple of posts. All you need to know is what a finite Abelian group is (if you know what a cyclic group is that’ll be enough) and what a commutative algebra is. If you already know what a functor and a natural transformation is, that would be great, but we’ll deal with all that abstract nonsense when we need it. So take two finite Abelian groups A and B, then a group-morphism is just a map $f~:~A \rightarrow B$ preserving the group-data. That is, f sends the unit element of A to that of B and f sends a product of two elements in A to the product of their images in B. For example, if $A=C_n$ is a cyclic group of order n with generator g and $B=C_m$ is a cyclic group of order m with generator h, then every group-morphism from A to B is entirely determined by the image of g; let’s say that this image is $h^i$.
But, because $g^n=1$ and by the conditions on a group-morphism, we must have that $h^{in} = (h^i)^n = 1$ and therefore $m$ must divide $i \cdot n$. This gives you all possible group-morphisms from A to B. There are plenty of finite abelian groups and many group-morphisms between any pair of them, and all this stuff we put into one giant sack and label it $\mathbf{abelian}$. There is another, even bigger sack, which is even simpler to describe. It is labeled $\mathbf{sets}$ and contains all sets as well as all maps between two sets. Right! Now what might be a map $F~:~\mathbf{abelian} \rightarrow \mathbf{sets}$ between these two sacks? Well, F should map any abelian group A to a set F(A) and any group-morphism $f~:~A \rightarrow B$ to a map between the corresponding sets $F(f)~:~F(A) \rightarrow F(B)$ and do all of this nicely. That is, F should send compositions of group-morphisms to compositions of the corresponding maps, and so on. If you take a pen and a piece of paper, you're bound to come up with the exact definition of a functor (that's what F is called). You want an example? Well, let's take F to be the map sending an Abelian group A to its set of elements (also called A) and which sends a group-morphism $A \rightarrow B$ to the same map from A to B. All F does is 'forget' the extra group-conditions on the sets and maps. For this reason F is called the forgetful functor. We will denote this particular functor by $\underline{\mathbb{G}}_m$, merely to show off. Luckily, there are lots of other and more interesting examples of such functors. Our first class we will call maxi-functors and they are defined using a finitely generated $\mathbb{C}$-algebra R. That is, R can be written as the quotient of a polynomial algebra $R = \frac{\mathbb{C}[x_1,\ldots,x_d]}{(f_1,\ldots,f_e)}$ by setting all the polynomials $f_i$ to be zero. For example, take R to be the ring of Laurent polynomials $R = \mathbb{C}[x,x^{-1}] = \frac{\mathbb{C}[x,y]}{(xy-1)}$. Other, and easier, examples of $\mathbb{C}$-algebras are the group-algebras $\mathbb{C} A$ of finite Abelian groups A. Such a group-algebra is a finite dimensional vector space with basis $e_a$, one for each element $a \in A$, with multiplication rule induced by the relations $e_a.e_b = e_{a.b}$ where on the left-hand side the multiplication . is in the group-algebra whereas on the right-hand side the multiplication in the index is that of the group A. By choosing a different basis one can show that the group-algebra is really just the direct sum of copies of $\mathbb{C}$ with component-wise addition and multiplication $\mathbb{C} A = \mathbb{C} \oplus \ldots \oplus \mathbb{C}$ with as many copies as there are elements in the group A. For example, for the cyclic group $C_n$ we have $\mathbb{C} C_n = \frac{\mathbb{C}[x]}{(x^n-1)} = \frac{\mathbb{C}[x]}{(x-1)} \oplus \frac{\mathbb{C}[x]}{(x-\zeta)} \oplus \frac{\mathbb{C}[x]}{(x-\zeta^2)} \oplus \ldots \oplus \frac{\mathbb{C}[x]}{(x-\zeta^{n-1})} = \mathbb{C} \oplus \mathbb{C} \oplus \mathbb{C} \oplus \ldots \oplus \mathbb{C}$. The maxi-functor associated to a $\mathbb{C}$-algebra R is the functor $\mathbf{maxi}(R)~:~\mathbf{abelian} \rightarrow \mathbf{sets}$ which assigns to a finite Abelian group A the set of all algebra-morphisms $R \rightarrow \mathbb{C} A$ from R to the group-algebra of A. But wait, you say (I hope), we also needed a functor to do something on group-morphisms $f~:~A \rightarrow B$.
Exactly, so to f we have an algebra-morphism $f'~:~\mathbb{C} A \rightarrow \mathbb{C}B$ and the functor on morphisms is defined via composition $\mathbf{maxi}(R)(f)~:~\mathbf{maxi}(R)(A) \rightarrow \mathbf{maxi}(R)(B) \qquad \phi~:~R \rightarrow \mathbb{C} A \mapsto f' \circ \phi~:~R \rightarrow \mathbb{C} A \rightarrow \mathbb{C} B$. So, what is the maxi-functor $\mathbf{maxi}(\mathbb{C}[x,x^{-1}])$? Well, any $\mathbb{C}$-algebra morphism $\mathbb{C}[x,x^{-1}] \rightarrow \mathbb{C} A$ is fully determined by the image of $x$, which must be a unit in $\mathbb{C} A = \mathbb{C} \oplus \ldots \oplus \mathbb{C}$. That is, all components of the image of $x$ must be non-zero complex numbers, that is $\mathbf{maxi}(\mathbb{C}[x,x^{-1}])(A) = \mathbb{C}^* \oplus \ldots \oplus \mathbb{C}^*$ where there are as many components as there are elements in A. Thus, the sets $\mathbf{maxi}(R)(A)$ are typically huge, which is the reason for the maxi-terminology. Next, let us turn to mini-functors. They are defined similarly but this time using finitely generated $\mathbb{Z}$-algebras such as $S=\mathbb{Z}[x,x^{-1}]$ and the integral group-rings $\mathbb{Z} A$ for finite Abelian groups A. The structure of these integral group-rings is a lot more delicate than in the complex case. Let's consider them for the smallest cyclic groups (the 'isos' below are only approximations!) $\mathbb{Z} C_2 = \frac{\mathbb{Z}[x]}{(x^2-1)} = \frac{\mathbb{Z}[x]}{(x-1)} \oplus \frac{\mathbb{Z}[x]}{(x+1)} = \mathbb{Z} \oplus \mathbb{Z}$ $\mathbb{Z} C_3 = \frac{\mathbb{Z}[x]}{(x^3-1)} = \frac{\mathbb{Z}[x]}{(x-1)} \oplus \frac{\mathbb{Z}[x]}{(x^2+x+1)} = \mathbb{Z} \oplus \mathbb{Z}[\rho]$ $\mathbb{Z} C_4 = \frac{\mathbb{Z}[x]}{(x^4-1)} = \frac{\mathbb{Z}[x]}{(x-1)} \oplus \frac{\mathbb{Z}[x]}{(x+1)} \oplus \frac{\mathbb{Z}[x]}{(x^2+1)} = \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}[i]$ For a $\mathbb{Z}$-algebra S we can define its mini-functor to be the functor $\mathbf{mini}(S)~:~\mathbf{abelian} \rightarrow \mathbf{sets}$ which assigns to an Abelian group A the set of all $\mathbb{Z}$-algebra morphisms $S \rightarrow \mathbb{Z} A$. For example, for the algebra $\mathbb{Z}[x,x^{-1}]$ we have that $\mathbf{mini}(\mathbb{Z}[x,x^{-1}])(A) = (\mathbb{Z} A)^*$, the set of all invertible elements in the integral group-algebra. To study these sets one has to study the units of cyclotomic integers. From the above decompositions it is easy to verify that for the first few cyclic groups, the corresponding sets are $\pm C_2, \pm C_3$ and $\pm C_4$. However, in general this set doesn't have to be finite. It is a well-known result that the group of units of an integral group-ring of a finite Abelian group is of the form $(\mathbb{Z} A)^* = \pm A \times \mathbb{Z}^{\oplus r}$ where $r = \frac{1}{2}(o(A) + 1 + n_2 -2c)$, with $o(A)$ the number of elements of A, $n_2$ the number of elements of order 2, and c the number of cyclic subgroups of A. So, these sets can still be infinite but at least they are a lot more manageable, explaining the mini-terminology. Now, we would love to go one step deeper and define nano-functors by the same procedure, this time using finitely generated algebras over $\mathbb{F}_1$, the field with one element. But as we do not really know what we might mean by this, we simply define a nano-functor to be a subfunctor of a mini-functor, that is, a nano-functor N has an associated mini-functor $\mathbf{mini}(S)$ such that for all finite Abelian groups A we have that $N(A) \subset \mathbf{mini}(S)(A)$.
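A quick aside before the example that follows: the unit-group formula quoted above is easy to check numerically for small cyclic groups. The little Python sketch below is mine, not Connes–Consani's (the function name is invented), and it only covers cyclic groups, where the count of cyclic subgroups is just the number of divisors.

```python
# Check r = (o(A) + 1 + n2 - 2c)/2 for A = C_n (cyclic case only).
# o(A) = n; n2 = number of elements of order 2; c = number of cyclic subgroups.

def unit_rank_cyclic(n):
    o = n
    n2 = 1 if n % 2 == 0 else 0                          # unique order-2 element iff n is even
    c = sum(1 for d in range(1, n + 1) if n % d == 0)    # subgroups of C_n <-> divisors of n
    return (o + 1 + n2 - 2 * c) // 2

for n in (2, 3, 4, 5):
    print(n, unit_rank_cyclic(n))
# prints rank 0 for C_2, C_3, C_4 (units are just +/-C_n, as stated above)
# and rank 1 for C_5, so (Z C_5)* already contains a free part.
```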
For example, the forgetful functor at the beginning, which we pompously denoted $\underline{\mathbb{G}}_m$, is a nano-functor as it is a subfunctor of the mini-functor $\mathbf{mini}(\mathbb{Z}[x,x^{-1}])$. Now we are almost done: an affine $\mathbb{F}_1$-scheme in the sense of Connes and Consani is a pair consisting of a nano-functor N and a maxi-functor $\mathbf{maxi}(R)$ such that two rather strong conditions are satisfied:
• there is an evaluation 'map' of functors $e~:~N \rightarrow \mathbf{maxi}(R)$
• this pair determines uniquely a 'minimal' mini-functor $\mathbf{mini}(S)$ of which N is a subfunctor
Of course we still have to turn this into proper definitions, but that will have to await another post. For now, suffice it to say that the pair $~(\underline{\mathbb{G}}_m,\mathbf{maxi}(\mathbb{C}[x,x^{-1}]))$ is an $\mathbb{F}_1$-scheme with corresponding uniquely determined mini-functor $\mathbf{mini}(\mathbb{Z}[x,x^{-1}])$, called the multiplicative group scheme. Continued here
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9358854293823242, "perplexity": 328.16526050774587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314130.7/warc/CC-MAIN-20190818205919-20190818231919-00060.warc.gz"}
https://damask.mpie.de/Documentation/IsoStrain?sortcol=0;table=1;up=0
## History

The isostrain assumption states that all crystals of the body deform exactly as the entire body. It is often also referred to as »full constraints (FC) Taylor« assumption as Taylor (1938) first applied it for the prediction of the deformation behavior of polycrystals.

## Deformation partitioning

In the framework of finite strain, the isostrain assumption can formally be written as equality of the deformation gradient of each grain $g$ with the average deformation gradient of the body $\cal{B}$ $$\label{eq: isostrain} \tnsr F^{g} = \bar{\tnsr F}\quad \forall \; g \in \cal{B}$$ or in rate form $$\label{eq: isostrain rate} \dot{\tnsr F^{g}} = \dot{\bar{\tnsr F}}\quad \forall \; g \in \cal{B}$$ at any time.

## Stress response

Typically, a different stress $\tnsr P^g$ will be required in each grain $g$ to obtain the prescribed deformation. This, for instance, can be due to anisotropy or different constitutive behavior (strength) of the grains. The isostrain scheme, therefore, usually violates stress equilibrium among the constituent grains.

### grain average (default)

The stress at the material point is calculated as the average of the stresses of all grains: $$\label{eq: stress average} \bar{\tnsr P}= \sum_{g=1}^{N}\nu^g\tnsr P^g$$ with $\nu^g = 1/N$ the (constant) volume fraction of grain $g$. This is the default behavior for the isostrain homogenization scheme.

### grains in parallel

In case all grains act in parallel, the stress at the material point is taken as the sum of the stresses of all grains: $$\label{eq: stress sum} \bar{\tnsr P}= \sum_{g=1}^{N}\tnsr P^g.$$

## Material configuration

### Parameters

To select the isostrain homogenization scheme and set the above parameters use the following (case-insensitive) naming scheme in a material.config file:

| key | value | comment |
|---|---|---|
| type | isostrain | |
| Ngrains | $N$ | number of grains (of equal volume) at material point |
| mapping | avg, mean | average stress calculation according to \eqref{eq: stress average} |
| mapping | sum, parallel | stress calculation according to \eqref{eq: stress sum} |

### Outputs

| key | comment |
|---|---|
| (output) Ngrains | report $N$ at material point |

### Boolean flags

| flag | comment |
|---|---|
| /echo/ | copy whole section to output log |

## References

[1] Taylor, G. I. Plastic strain in metals. J. Inst. Metals 62 (1938) 307–324
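The two stress rules above amount to a one-line computation. The following NumPy sketch is illustrative only (it is not DAMASK source code; the function name is invented) and assumes equal grain volume fractions, as in the scheme described above.

```python
import numpy as np

def homogenized_stress(P, mapping="avg"):
    """Combine per-grain first Piola-Kirchhoff stresses P (shape: N x 3 x 3).

    'avg'/'mean'     : volume-fraction-weighted average with nu_g = 1/N
    'sum'/'parallel' : plain sum over grains (grains acting in parallel)
    """
    P = np.asarray(P)
    N = P.shape[0]
    if mapping in ("avg", "mean"):
        return P.sum(axis=0) / N        # sum_g nu_g P_g with nu_g = 1/N
    if mapping in ("sum", "parallel"):
        return P.sum(axis=0)            # sum_g P_g
    raise ValueError("unknown mapping: " + mapping)

# example: two grains responding with different stresses to the same deformation
P = np.stack([np.diag([1.0, 0.0, 0.0]), np.diag([3.0, 0.0, 0.0])])
print(homogenized_stress(P, "avg"))     # diag(2, 0, 0)
print(homogenized_stress(P, "sum"))     # diag(4, 0, 0)
```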
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.799399733543396, "perplexity": 15610.084492727547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528457.66/warc/CC-MAIN-20191210152154-20191210180154-00411.warc.gz"}
https://www.groundai.com/project/mechanical-detection-of-carbon-nanotube-resonator-vibrations/
# Mechanical detection of carbon nanotube resonator vibrations

D. Garcia-Sanchez, A. San Paulo, M.J. Esplandiu, F. Perez-Murano, L. Forró, A. Aguasca, A. Bachtold

ICN, Campus UABarcelona, E-08193 Bellaterra, Spain. CNM-CSIC, Campus UABarcelona, E-08193 Bellaterra, Spain. EPFL, CH-1015, Lausanne, Switzerland. Universitat Politecnica de Catalunya, Barcelona, Spain.

July 12, 2019

###### Abstract

Bending-mode vibrations of carbon nanotube resonator devices were mechanically detected in air at atmospheric pressure by means of a novel scanning force microscopy method. The fundamental and higher order bending eigenmodes were imaged with sub-nanometer resolution in vibration amplitude. The resonance frequency and the eigenmode shape of multi-wall nanotubes are consistent with the elastic beam theory for a doubly clamped beam. For single-wall nanotubes, however, resonance frequencies are significantly shifted, which is attributed to fabrication generating, for example, slack. The effect of slack is studied by pulling down the tube with the tip, which drastically reduces the resonance frequency.

###### pacs: 85.85.+j, 73.63.Fg, 81.16.Rf, 85.35.Kt

Carbon nanotubes offer unique opportunities as high-frequency mechanical resonators for a number of applications. Nanotubes are ultra light, which is ideal for ultralow mass detection and ultrasensitive force detection [1, 2]. Nanotubes are also exceptionally stiff, making the resonance frequency very high. This is interesting for experiments that manipulate and entangle mechanical quantum states [3, 5, 4]. However, mechanical vibrations of nanotubes remain very difficult to detect. Detection has been achieved with transmission or scanning electron microscopy [1, 6, 7, 8], and field-emission [9]. More recently, a capacitive technique has been reported [10, 11, 12] that allows detection for nanotubes integrated in a device, and is particularly promising for sensing and quantum electromechanical experiments. A limitation of this capacitive technique is that the measured resonance peaks often cannot be assigned to their eigenmodes. In addition, it is often difficult to discern resonance peaks from artefacts of the electrical circuit. It is thus desirable to develop a method that allows the characterization of these resonances.

In this letter, we demonstrate a novel characterization method of nanotube resonator devices, based on mechanical detection by scanning force microscopy (SFM). This method enables the detection of the resonance frequency in air at atmospheric pressure and the imaging of the mode shape for the first bending eigenmodes. Measurements on single-wall nanotubes (SWNT) show that the resonance frequency is very device dependent, and that it dramatically decreases as slack is introduced. We show that multi-wall nanotube (MWNT) resonators behave differently from SWNT resonators. The resonance properties of MWNTs are much more reproducible, and are consistent with the elastic beam theory for a doubly clamped beam without any internal tension.

An image of one nanotube resonator used in these experiments is shown in Fig. 1(a).
The resonator consists of a SWNT grown by chemical-vapour deposition [13] or a MWNT synthesized by arc-discharge evaporation [14]. The nanotube is connected to two Cr/Au electrodes patterned by electron-beam lithography on a high-resistivity Si substrate with a thermal silicon dioxide layer. The nanotube is released from the substrate during a buffered HF etching step. The Si substrate is fixed for SFM measurements on a home-made chip carrier with transmission lines.

A schematic of the measurement method is presented in Fig. 1(b). The nanotube motion is electrostatically actuated with an oscillating voltage applied on a side gate electrode. As the driving frequency approaches the resonance frequency of the nanotube, the nanotube vibration becomes large. In addition, the amplitude of the resonator vibration is 100% modulated at a lower frequency, which can be seen as sequentially turning the vibration on and off. The resulting envelope of the vibration amplitude is sensed by the SFM cantilever. Note that the SFM cantilever has a limited bandwidth response, so it cannot follow the rapid vibrations at the driving frequency [15]. The SFM is operated in tapping mode to minimize the forces applied on the nanotube by the SFM cantilever. The detection of the vibrations is optimized by matching the modulation frequency to the resonance frequency of the first eigenmode of the SFM cantilever. As a result, the first cantilever eigenmode is excited with an amplitude proportional to the nanotube amplitude, which is measured with a lock-in amplifier tuned at the modulation frequency. The second eigenmode of the SFM cantilever is used for topography imaging in order to suppress coupling between topography and vibration detection (see Fig. 1(c)). Note that in-plane nanotube vibrations can be detected by means of the interaction between the nanotube and the tip side, or asperities at the tip apex.

We start by discussing measurements on MWNTs. Suspended MWNTs stay straighter than SWNTs and are thus more suitable to test the technique. Figures 2(a-e) show the topography and the nanotube vibration images obtained at different actuation frequencies. The different shapes of the vibrations are attributed to different bending eigenmodes. Zero, one, and two nodes correspond to the first, second and third order bending eigenmodes. Figure 2(g) shows the resonance peak of the fundamental eigenmode for another MWNT device. The resonance frequency is remarkably high. It is higher than the reported resonance frequency of doubly clamped resonators based on nanotube or other materials [11, 16]. The quality factor of the tubes that we have studied is 3-20.

We now compare these results with the elastic beam theory for a doubly clamped beam. The displacement $z$ is given by [17]

$$\rho\pi r^2\,\frac{\partial^2 z}{\partial t^2} + EI\,\frac{\partial^4 z}{\partial x^4} - T\,\frac{\partial^2 z}{\partial x^2} = 0 \qquad (1)$$

with $\rho$ the density of graphite, $r$ the radius, $E$ the Young modulus, $I$ the moment of inertia, and $T$ the tension in the tube. Assuming that $T=0$ and that $z=0$ and $\partial z/\partial x = 0$ at $x=0$ and $x=L$, the resonance frequencies are [17]

$$f_n = \frac{\beta_n^2}{4\pi}\,\frac{r}{L^2}\,\sqrt{\frac{E}{\rho}} \qquad (2)$$

with $\beta_1 \simeq 4.73$, $\beta_2 \simeq 7.85$, $\beta_3 \simeq 11.00$, and $L$ the length. Table 1 shows the resonance frequency for all the measured MWNTs [18]. The measured frequencies span two orders of magnitude, between 51 MHz and 3.1 GHz. Eq. 2 describes these measured frequencies rather accurately when the Young modulus is set appropriately; the required value of $E$ is consistent with results on similarly prepared MWNT devices [20].
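To get a feeling for Eq. (2), the sketch below evaluates the first three resonance frequencies. The geometry and the Young modulus are illustrative values chosen here (they are not the parameters of any device in Table 1); only the density of graphite and the clamped-clamped eigenvalues β_n are standard numbers.

```python
import numpy as np

E   = 1.0e12     # Young modulus, Pa (illustrative; of order 1 TPa for nanotubes)
rho = 2200.0     # density of graphite, kg/m^3
r   = 5.0e-9     # tube radius, m (hypothetical)
L   = 1.0e-6     # suspended length, m (hypothetical)

beta = [4.730, 7.853, 10.996]          # clamped-clamped beam eigenvalues beta_n

for n, b in enumerate(beta, start=1):
    f_n = (b**2 / (4 * np.pi)) * (r / L**2) * np.sqrt(E / rho)   # Eq. (2)
    print(f"f_{n} = {f_n / 1e6:.0f} MHz")
# roughly 190 MHz, 520 MHz and 1.0 GHz for these values; f_n scales as r/L^2,
# which is why short, thick tubes reach the GHz range quoted for the MWNTs.
```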
Such a good agreement is remarkable, since rather large deviations from Eq. 2 have been reported for nanoscale resonators made of other materials [17, 21]. These deviations have been attributed to the tension or slack (also called buckling) that can result during fabrication. Our measurements suggest that tension and slack have little effect on the resonances of MWNTs. We attribute this to the high mechanical rigidity of MWNTs, which makes such deformation difficult [22]. This result may be interesting for certain applications, such as radio-frequency signal processing [23], where the resonance frequency has to be predetermined.

We now look at the spatial shape of the vibrations. The maximum displacement is given by $\alpha_n z_n(x)$, with $z_n$ the solution of Eq. 1 for $T=0$ [17],

$$z_n = a_n\left(\cos\frac{\beta_n x}{L} - \cosh\frac{\beta_n x}{L}\right) + b_n\left(\sin\frac{\beta_n x}{L} - \sinh\frac{\beta_n x}{L}\right) \qquad (3)$$

with $a_n/b_n = -1.017$, $-0.9992$, and $-1.00003$ for $n = 1, 2, 3$. When damping is described within the context of Zener's model, we have [17]

$$\alpha_n = \frac{1}{4\pi^3 r^2 \rho L^3}\,\frac{1}{f_n^2 - f_{RF}^2 - \mathrm{i}\,f_n^2/Q_n}\int_0^L z_n(x)\,F_{ext}(x)\,\mathrm{d}x \qquad (4)$$

with $Q_n$ the quality factor measured for each eigenmode and $F_{ext}$ the external force, which is set by the DC and AC voltages applied on the gate and by the capacitance between the gate and the tube. A precise estimate of $F_{ext}$ is very challenging because the gate capacitance is difficult to determine. The most difficult task is to account for the asymmetric gate and for the screening of the clamping electrodes. As a simplification, we take the force to be uniform along a certain portion of the tube and zero otherwise; the amplitude of this force and the extent of the portion are used as fitting parameters. A third fitting parameter is the linear conversion of the displacement of the tube into the one of the cantilever that is measured [24]. Fig. 2(f) shows the results of the calculations. The model qualitatively reproduces the overall shape of the measured eigenmodes as well as the ratio between the amplitudes of the different eigenmodes. In addition, the model predicts that the displacement at the nodes is different from zero, as shown in the measurements. This is due to the low quality factor, so the first eigenmode contributes to the displacement even at the resonance frequency of the second or the third eigenmode. These calculations allow for an estimate of the tube displacement, which is 0.2 nm for the fundamental eigenmode (Fig. 2(f)). We emphasize that this estimate indicates only the order of magnitude of the actual vibration amplitude, since crude simplifications have been used for the external force. The vibration amplitude for the other devices is estimated to be low as well, between 0.1 pm and 0.5 nm. Notice that the measured and calculated responses are quite comparable (Fig. 2(g,f)). We are pursuing numerical simulations taking into account the microscopic tube-tip interaction that support this.

We turn our attention to the quality factor. The low quality factor may be attributed to the disturbance of the SFM tip. Note, however, that the topography feedback is set at the limit of cantilever retraction, for which the tube-tip interaction is minimum. Moreover, we have noticed no change in the quality factor as the amplitude setpoint of the SFM cantilever is reduced by 3-5% from the limit of cantilever retraction, which corresponds to an enhancement of the tube-tip interaction. This suggests that the tip is not the principal source of dissipation. The low quality factor may instead be attributed to collisions with air molecules. Indeed, previous measurements in vacuum on similarly prepared resonators show quality factors between 10 and 200 [10, 12], which is larger than the 3-20 we have obtained.
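Returning for a moment to the mode shapes: Eq. (3) can be evaluated directly for comparison with the imaged eigenmodes of Fig. 2. The sketch below uses the a_n/b_n ratios quoted above with an arbitrary normalization; it is an illustration written here, not the authors' fitting code.

```python
import numpy as np

beta     = [4.730, 7.853, 10.996]       # beta_n for the first three modes
a_over_b = [-1.017, -0.9992, -1.00003]  # a_n/b_n ratios quoted in the text

def mode_shape(n, xi):
    """Normalized bending eigenmode z_n as a function of xi = x/L, Eq. (3)."""
    bn = beta[n - 1]
    a, b = a_over_b[n - 1], 1.0          # only the ratio a_n/b_n matters here
    z = a * (np.cos(bn * xi) - np.cosh(bn * xi)) + b * (np.sin(bn * xi) - np.sinh(bn * xi))
    return z / np.max(np.abs(z))

xi = np.linspace(0.0, 1.0, 201)          # from one clamp to the other
for n in (1, 2, 3):
    z = mode_shape(n, xi)
    zs = z[np.abs(z) > 0.05]             # ignore the near-zero tails at the clamps
    print(n, np.sum(np.diff(np.sign(zs)) != 0))
# prints 0, 1 and 2 internal sign changes (nodes) for n = 1, 2, 3,
# matching the zero-, one- and two-node images of Fig. 2(a-e).
```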
In addition, the quality factor in the molecular regime can be estimated from the effective mass of the beam, the velocity of the air molecules, and the pressure [25]. The estimate obtained for the tube in Fig. 2(a) is not too far from the value we have measured. Note that the molecular regime holds when the mean free path of the air molecules is larger than the resonator dimensions. The mean free path is about 65 nm at 1 atm, so we are at the limit of the applicability of this regime. Overall, a more systematic study should be carried out to clearly identify the origin of the low quality factor.

Having shown that SFM successfully detects mechanical vibrations of MWNTs, we now look at SWNTs (Fig. 3(a)). Table 1 shows poor agreement between the measured resonance frequencies and the values expected from a doubly clamped beam. We attribute this to tension or slack. When the tube is elongated due to tension, the resonance frequency increases accordingly [19]. The measured frequency of the long SWNT in Tab. 1 is 128% larger than what is expected for a beam without tension. This deviation can be accounted for by an elongation of only 0.2 pm. This suggests that even a weak elongation can dramatically shift the resonance frequency. Such an elongation can result, for example, from the bending of the partially suspended Cr/Au electrodes. Table 1 shows that the resonance frequencies of other SWNTs can be below the one expected from a doubly clamped beam. This may result from the additional mass of contamination adsorbed on the tube [1, 2]. This may also be the consequence of slack, which occurs when the tube is longer than the distance between the electrodes [26]. To further investigate the effect of slack, we have introduced slack in a non-reversible way by pulling down the tube with the SFM cantilever. Figure 3(b) shows that the resonance frequency can be divided by two for a slack below 1%. The slack is defined in terms of the excess tube length relative to the separation between the clamping points. Taking slack into account, Eq. 1 has been solved analytically only for in-plane vibrations (plane of the buckled beam) [27]. Recent numerical calculations have extended this treatment to out-of-plane vibrations [26]. It has been shown that the resonance frequency of the fundamental eigenmode can even be zero when no force is applied on the beam. The schematic in Fig. 3(b) shows the physics of this effect. For zero slack, the beam motion can be described by a spring with the spring force that results from the tube bending. When slack is introduced, the fundamental eigenmode is called "jump rope" [26]. It is similar to a mass attached to a fixed point through a massless rod. The resonance frequency then no longer depends on bending but on an external force, which can be the electrostatic force between the tube and the side gate. We estimate the reduction of the resonance frequency when the slack passes from 0.3 to 0.9% in Fig. 3(b). Assuming that the external force stays constant, and using the results of [26], we expect a reduction by a factor of about 1.3, which is consistent with the experiment. More studies should be done, in particular to relate the resonance frequency to the slack, but also to understand the effect of the boundary conditions at the clamping points. The section of the nanotube in contact with the electrodes may be bent, especially after SFM manipulation, so that the clamped boundary conditions assumed above may no longer hold there. Overall, these results show that SFM, as a tool to visualize the spatial distribution of the vibrations, is very useful to characterize eigenmodes of SWNT resonator devices.
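As a side note on the air-damping estimate above, the ~65 nm mean free path can be checked with standard kinetic theory. In the sketch below, the effective molecular diameter of air is an assumed textbook value, not a number taken from the paper.

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T  = 300.0          # temperature, K (room temperature)
p  = 101325.0       # pressure, Pa (1 atm)
d  = 3.7e-10        # effective diameter of an air molecule, m (assumed)

mean_free_path = kB * T / (np.sqrt(2) * np.pi * d**2 * p)
print(f"{mean_free_path * 1e9:.0f} nm")   # ~67 nm, consistent with the ~65 nm quoted above
```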
In addition, SFM detection provides unique information about the physics of nanotube resonators such as the effect of slack. Further studies will be carried out on slack, for which interesting predictions have been reported [26]. For example, the number of nodes of higher eigenmodes is expected to change as slack is increased. We anticipate that the reported SFM detection will be very useful to study NEMS devices made of other materials, such as graphene [28] or microfabricated semiconducting resonators [29]. We thank J. Bokor, A.M. van der Zande, J. Llanos, and S. Purcell for discussions. The research has been supported by an EURYI grant and FP6-IST-021285-2.

## References

• (1) P. Poncharal, et al., Science 283, 1513 (1999)
• (2) B. Reulet, et al., Phys. Rev. Lett. 85, 2829 (2000)
• (3) M. Blencowe, Physics Reports 395, 159 (2004)
• (4) R.G. Knobel, A.N. Cleland, Nature 424, 291 (2003)
• (5) M.D. LaHaye, et al., Science 304, 74 (2004)
• (6) B. Babic, et al., Nano Lett. 3, 1577 (2003)
• (7) J.C. Meyer, M. Paillet, S. Roth, Science 309, 1539 (2005)
• (8) K. Jensen, et al., Phys. Rev. Lett. 96, 215503 (2006)
• (9) S.T. Purcell, et al., Phys. Rev. Lett. 89, 276103 (2002)
• (10) V. Sazonova, et al., Nature 431, 284 (2004)
• (11) H.B. Peng, et al., Phys. Rev. Lett. 97, 087203 (2006)
• (12) B. Witkamp, M. Poot, and H.S.J. van der Zant, Nano Lett. 6, 2904 (2006)
• (13) J. Kong, et al., Nature 395, 878 (1998)
• (14) J.M. Bonard, et al., Adv. Mater. 9, 827 (1997)
• (15) The SFM microscope is a Dimension 3100 from Veeco and the SFM tips are from Olympus. The amplitude setpoint of the topography feedback is set lower than the free amplitude. The time constant of the lock-in is about 10 ms.
• (16) X.M.H. Huang, et al., Nature 421, 496 (2003)
• (17) A.N. Cleland, Foundations of Nanomechanics (Springer, Berlin 2003)
• (18) We did not observe a change of the resonance frequency as the gate voltage is varied. This is attributed to the low voltage and the short tube. For instance, to see a change for the device in Fig. 2(a), we estimate that the gate voltage should be larger than 13 V [19].
• (19) S. Sapmaz, et al., Phys. Rev. B 67, 235414 (2003)
• (20) R. Lefevre, et al., Phys. Rev. Lett. 95, 185504 (2005)
• (21) A. Husain, et al., Appl. Phys. Lett. 83, 1240 (2003)
• (22) T. Hertel, R.E. Walkup, and P. Avouris, Phys. Rev. B 58, 13870 (1998)
• (23) W. Jing, Z. Ren, and C.T.C. Nguyen, IEEE Trans. Ferro. Freq. Control 51, 1607 (2004)
• (24) We observed that the measured cantilever amplitude depends linearly on the drive; in the linear regime the tube amplitude is also expected to be linear in the drive. This suggests that the measured amplitude is linearly proportional to the tube amplitude.
• (25) K.L. Ekinci, M.L. Roukes, Rev. Sci. Instrum. 76, 061101 (2005)
• (26) H. Ustunel, D. Roundy, and T.A. Arias, Nano Lett. 5, 523 (2005)
• (27) A.H. Nayfeh, W. Kreider, T.J. Anderson, AIAA J. 33, 1121 (1995)
• (28) J. Scott Bunch, et al., Science 315, 490 (2007)
• (29) B. Ilic, S. Krylov, L.M. Bellan, H.G. Craighead, J. Appl. Phys. 101, 044308 (2007)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8828885555267334, "perplexity": 1800.090241177945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145316.8/warc/CC-MAIN-20200220224059-20200221014059-00341.warc.gz"}
https://brilliant.org/practice/number-theory-warmups-level-3-challenges/
# Number Theory Warmups

If numbers aren't beautiful, we don't know what is. Dive into this fun collection to play with numbers like never before, and start unlocking the connections that are the foundation of Number Theory.

# Number Theory Warmups: Level 3 Challenges

One of the seven goblets above is made of real gold. If you start counting at A and wind back and forth while counting (A, B, C, D, E, F, G, F, E, D, ...), then the golden goblet would be the $$1000^\text{th}$$ one that you count. Which one is the golden goblet?

$\Huge {\color{blue}9}^{{\color{green}8}^{{\color{red}7}^{{\color{brown}6} ^{\color{magenta}5}}}}$ What are the last two digits when this integer is fully expanded out?

Find the sum of all positive integers $$n$$ such that $$\dfrac{(n+1)^2}{n+7}$$ is an integer.

Find the sum of all prime numbers $$p$$ such that $$p \mid \underbrace{111\dots 1}_{p}$$.

There is a prime number $$p$$ such that $$16p+1$$ is the cube of a positive integer. Find $$p$$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48579198122024536, "perplexity": 576.8324009309397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105334.20/warc/CC-MAIN-20170819085604-20170819105604-00668.warc.gz"}
https://sciencematters.io/articles/201602000027
# Magnetism of tryptophan and walk memory of proteins

Biased or correlated random walk of proteins in the intracellular compartment may shape various cellular responses. We report sensitivity of the protein walk to an external static magnetic field, which is equivalent to subjecting the protein solution to a spin perturbation. We explain this spin response by super-paramagnetism of individual tryptophan residues present in a set of randomly chosen proteins. The correlated random walk system can thus facilitate magneto-sensing in biological systems, a subject that is of some recent interest.

Figure 1: Magnetic field induced altered phase space of proteins and magnetism of l-tryptophan. Aqueous solution of l-tryptophan was dried in absence (control) and in presence (pre-incubated) of a 0.2 Tesla magnetic field. Magnetic analysis was performed using the dried sample. (a) Magnetization vs temperature (M-T) measurement of SMF pre-incubated (red lines) and control (blue lines) l-tryptophan. Solid and broken lines indicate Zero Field Cooling (ZFC) and Field Cooling (FC) profiles. (b) Magnetization vs Field (M-H) plot of l-tryptophan. Black arrows indicate the maximum positive magnetization as well as the threshold external field strength, beyond which reversal of magnetization occurs. Panels (c) to (k) show the phase space (XY) trajectories of the proteins (BSA, ferritin and fibrinogen). Scattering based imaging was performed before, after 1 minute and after 5 minutes of 0.2 T magnetic field exposure. In each condition video imaging was performed for 1 minute. In each 1 minute video, 3000-3500 particles were imaged. The X, Y coordinates of each particle were used to construct the pseudo-colored images (see materials and methods for the algorithm). The pseudo-colored images contain 10,000 grids (100×100) and white pixels indicate the temporal evolution of the X, Y coordinates. (c, d, e) are for BSA, before (c), after 1 minute (d) and after 5 minutes (e) of magnetic field exposure. Similarly, in the same order, (f, g, h) and (i, j, k) are for ferritin and fibrinogen respectively.

The presence of quantum coherence in biological systems came as a surprise, as it implied the presence of large-scale spatio-temporal coherence under physiological conditions. This is unlike the quantum-tunnelling-like events reported in certain biomolecular interactions[1] or the electron hopping in electron transfer reactions[2] reported decades back. The counterintuitive part of the new and challenging discipline of quantum biology is not centred around whether quantum effects exist in biosystems. It follows rather trivially that the answer to that question is affirmative[3]. What remains unexplained is how large-scale quantum events can persist. A strong bias in this perspective is to postulate a few specialized proteins like cryptochromes[4][5] which can serve as amplifiers of quantum events. In this paper we have looked into the matter from an independent perspective: can a correlated random walk serve as an amplifier of small perturbations and also behave as a memorizing machinery? We have studied the random walk patterns of a magnetic and two nonmagnetic proteins and shown how, irrespective of their magnetic properties, they can memorize exposure to a static magnetic field (SMF). We propose this as a spin memory of the protein ensemble.
The paradigm shift that is needed in explaining the quantum coherence may lie in such correlated random walks of proteins, as they are translated, modified, or undergo streaming or degradation in the cell. This view may be an alternative to the proposal that a radical pair mechanism or triplet fusion in some acceptor-donor combination causes the large-scale coherence. The present paper considers the question of magneto-sensing and magnetic memory in biomolecular interactions. As our understanding of cellular communication is based on electrical, chemical or even vibrational components, the existence of a new interaction component (superparamagnetism) at the level of amino acids may substantiate our insights into biomolecular interactions and the cellular communication process. First of all, we measured the magnetic properties of tryptophan with and without magnetic pre-incubation. Figures 1(a) and (b) show the superparamagnetic behavior of tryptophan and the field-dependent dynamic magnetic transition of the same, respectively. Figure 1(a) shows the zero-field-cooled (solid line) and field-cooled (broken line) magnetization profiles of tryptophan at different temperatures. The superparamagnetic nature of the amino acid is confirmed by the presence of a Néel (blocking) temperature $T_B$ (indicated by the arrow in Figure 1a) [6] at which $\frac{\delta M}{\delta T}$ changes sign from positive to negative. The red and blue profiles compare samples pre-exposed and unexposed to SMF. Incidentally, the existence of $T_B$ for tryptophan (approximately 110 K) asserts its superparamagnetic behavior, this being reported for the first time. It may be further noted that the blocking temperatures for the field-exposed and unexposed samples are identical. Now, $T_B$ is proportional to the product of the magnetic anisotropy and the domain volume. As the exposed sample (zero field, red line) shows a higher magnetization at $T_B$, a higher magnetic anisotropy is expected in that case, and this has to be compensated by a reduction of the magnetic volume as a result of the SMF exposure. The magnetization vs field (M-H) plot of tryptophan clearly indicates the retention of magnetization at zero field (see gray arrow of Figure 1b). Not only that, but a positive magnetization is also observed up to a threshold value of the external field (see black arrow of Figure 1b). After that, negative magnetization is observed. The basis of the observed transition from positive to negative magnetization with increasing applied field needs further study. In metallic systems (gold nanoparticles) this type of field-dependent magnetization can be explained by interface effects[7]. We have tried to explain the probable mechanism of this phenomenon in a nonmetallic system in the following section. But our finding may provide the physical basis of a reported phenomenon[8] regarding the magnetic field induced optical memory of tryptophan. Interestingly, for proteins we did not find such differential optical behavior of tryptophan in response to magnetic field exposure. But we found a differential auto-correlation decay pattern in dynamic light scattering studies after magnetic field exposure (data not shown). It was natural to question whether the static field had any effect on the protein structure. Any structural change would lead to changes in the auto-correlation decay pattern. Our studies using Circular Dichroism (CD) or ANS fluorescence showed no structural change on the time scale of the study.
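Returning briefly to the blocking temperature, and for orientation only: if one adopts the conventional blocking criterion for superparamagnets, KV ≈ 25 k_B T_B for a measurement window of order 100 s (an assumption of this sketch, not a statement made in the paper), the reported T_B ≈ 110 K translates into an anisotropy-energy barrier of roughly:

```python
kB  = 1.380649e-23   # Boltzmann constant, J/K
T_B = 110.0          # blocking temperature reported above, K

KV = 25 * kB * T_B   # anisotropy constant x magnetic volume (conventional criterion)
print(f"KV ~ {KV:.1e} J (~{KV / 1.602e-19:.2f} eV)")
# ~3.8e-20 J, i.e. ~0.24 eV. For a fixed T_B, a larger anisotropy K implies a
# smaller magnetic volume V, which is the compensation argued for in the text.
```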
This indicates that either such a change is absent or that any change had an ultra-short duration. To gain insight, we performed scattering-based imaging of a thin film containing solutions of the model proteins bovine serum albumin (BSA), ferritin and fibrinogen. BSA and fibrinogen are nonmagnetic in nature. The former is roughly spherical in shape and the latter is rod-shaped. On the other hand, ferritin is a non-heme magnetic protein that is perfectly spherical in shape. We imaged the solution before, after one minute and after five minutes of magnetic field exposure at room temperature. Then, for analysis, we constructed an image plane, which is divided into 10,000 grids (100×100), where the X-Y trajectories of the proteins are plotted. See the materials and methods section for details of the experiment and the m-code used to construct the image. Figures 1(c)-(k) show the X-Y trajectories of the proteins before exposure (c, f and i), after 1 minute of exposure (d, g and j) and after 5 minutes of exposure (e, h and k) to a 0.2 Tesla static magnetic field. Panels (c) (before), (d) (after 1 min) and (e) (after 5 min) show the X-Y trajectories of BSA, and, in the same order, (f), (g) and (h) and (i), (j) and (k) are for ferritin and fibrinogen respectively. Before magnetic field exposure the phase space is isotropic (space filling; see c, f and i). A slight anisotropy in space is observed after 1 minute of field exposure (see d, g and j), and after 5 minutes of exposure the effect becomes more prominent for BSA and ferritin (see e and h). For fibrinogen the effect is not so prominent (compare k with e and h), but a rotation in the direction of the trajectories can be clearly observed (compare j and k with i). To quantify the extent of the anisotropic phase space we calculated an index that we call the Space Inhomogeneity Index (SII). The SII is the ratio of the space filled by the proteins to the space filled by a random distribution of the same number of events. Random points generated by the rand function of Matlab were used to fill an image plane of the same matrix size (100×100). The ratio between the numbers of white pixels filling the two matrices was assigned as the SII. Table 1 shows the SII values of the proteins before and after magnetic field exposure, from which the SMF-induced anisotropy of the phase space can be compared. An SII value close to 1 means the space is filled homogeneously, and a lower SII value indicates inhomogeneity in space. That is, the SMF induces inhomogeneity in the phase space of particle motion (anisotropic movement).

Table 1: Space Inhomogeneity Index (SII) of the proteins (BSA, ferritin and fibrinogen) before, after 1 minute and after 5 minutes of magnetic field exposure.

| Protein | SII (before exposure) | SII (1 minute exposure) | SII (5 minutes exposure) |
|---|---|---|---|
| BSA | 0.862 | 0.686 | 0.585 |
| ferritin | 0.911 | 0.528 | 0.337 |
| fibrinogen | 0.855 | 0.852 | 0.789 |

The effect of the SMF on the Brownian motion of the proteins can be observed with the naked eye in Movies S1, S2 and S3. The effect of SMF on diffusion has been studied theoretically. As the proteins are charged particles (charge Q), in the presence of a field the Lorentz force (Q v × B) may restrict their motion. The direction of the force will be perpendicular to the applied magnetic field direction and to the direction of motion of the particle. Hence, a vortex-like streaming may be observed. The resultant motion is intrinsically linked to the shape of the particle. The energy required to rotate a spherical particle is less than that required to rotate a particle with higher shape anisotropy.
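Returning to the SII defined above, a minimal sketch of how such a number can be computed from tracked coordinates is given below. It follows the description in the text (occupied grid cells of the measured trajectories divided by those of an equal number of random points, on a 100×100 grid), but it is not the authors' script and the function names are invented.

```python
import numpy as np

def occupancy(x, y, m=100, n=100):
    """Fraction of grid cells visited by the (x, y) trajectory points."""
    x = (np.asarray(x) - np.min(x)) / np.ptp(x)   # normalize coordinates to [0, 1]
    y = (np.asarray(y) - np.min(y)) / np.ptp(y)
    ix = np.minimum((x * m).astype(int), m - 1)
    iy = np.minimum((y * n).astype(int), n - 1)
    grid = np.zeros((m, n), dtype=bool)
    grid[ix, iy] = True                           # mark visited cells
    return grid.mean()

def space_inhomogeneity_index(x, y, m=100, n=100, seed=0):
    """SII: occupancy of the measured trajectories divided by the occupancy
    of the same number of uniformly random points (cf. Matlab's rand)."""
    rng = np.random.default_rng(seed)
    xr, yr = rng.random(len(x)), rng.random(len(y))
    return occupancy(x, y, m, n) / occupancy(xr, yr, m, n)
```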
That is why we found an attenuated response for fibrinogen (a rod-shaped protein). This type of phenomenon is explainable classically. But the memory-like effect, that is, the persistence of the space anisotropy after field withdrawal, needs a nonclassical treatment. We observed the phenomenon when the field is not present (after withdrawal of the field). The phenomenon can be summarized as the memory of an external spin perturbation within a spin system. So we look for a model which deals with spatial coherence between spin states, and that is the Ising model. Ernst Ising solved the one-dimensional model in an attempt to explain the spontaneous magnetization of ferromagnetic systems. More recently the model has been used to explain various biological phenomena such as protein aggregation[9], protein folding[10] and DNA-protein interaction[11]. The Ising model has been exploited to explain various binding processes over the last four decades, one important citation being [12]. The minimal Ising lattice Hamiltonian is given by

$H = -\sum_{\langle ij \rangle} J_{ij}\,\sigma_i \sigma_j - 2 \beta \sum_{i} \sigma_i \qquad (1)$

In equation (1), σ can take two values (±1) depending on the type of interaction. The first term on the RHS represents the coupling between neighboring spins, which exists even in the absence of a field. The second term, involving β, is the contribution due to the Bohr magneton and exists only in the presence of an external magnetic field B. The first term is responsible for spontaneous magnetization in a ferromagnetic system in the absence of a field (B=0). For biomolecules the interaction energy between individual spins is not sufficient for spontaneous magnetization in the absence of a field. But the higher-order structure of biomolecules (peptide bonds and pi-pi interactions between aromatic amino acids) may form spin-coherent domains. If we treat these domains as individual spin states, then the interaction between domains may result in a very low magnetization, as the exchange interaction is not as strong as the ferromagnetic interaction. In this context, we also have to assume that the coupling coefficient J is a function of the external field and involves other degrees of freedom (higher-order structure) along with spin, in a nonlinear way. Now, in the presence of a magnetic field, the interaction energy between domains (first term on the RHS of Eq. (1)) and that between an individual domain and the external magnetic field (second term on the RHS of Eq. (1)) compete with each other. The differential contribution of these two types of interactions may result in field-dependent differential magnetism of the system. For the first part of the RHS of Eq. (1), σ = 1, as the interaction between domains is energetically favorable and results in three-dimensional protein structures with global minima. But the value of σ for the second part of the RHS of Eq. (1) is negative (-1), which is governed by the diamagnetic anisotropy of the system. At low field the first part predominates over the diamagnetic part and results in a positive magnetization (ferromagnetism), but at high field the diamagnetic contribution predominates and negative magnetization is found (see the M-H plot of tryptophan in Figure 1b). This field-dependent dynamic magnetic property and onset of ferromagnetism (due to coherent interaction between identical domains) is the basis of the memory (walk memory of proteins) of magnetic field exposure in biosystems.
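To make Eq. (1) concrete, the sketch below evaluates the Hamiltonian for a short one-dimensional chain of domain "spins"; the coupling and field values are illustrative numbers chosen here, not parameters extracted from the data.

```python
import numpy as np

def ising_energy(sigma, J, beta_term):
    """Energy of Eq. (1) for a 1D open chain with uniform nearest-neighbour
    coupling J and field term -2*beta_term*sum(sigma)."""
    sigma = np.asarray(sigma)
    coupling = -J * np.sum(sigma[:-1] * sigma[1:])   # -sum_<ij> J sigma_i sigma_j
    field    = -2.0 * beta_term * np.sum(sigma)      # -2 beta sum_i sigma_i
    return coupling + field

aligned = np.ones(10)                   # all domains "up": coherent, finite moment
mixed   = np.array([1, -1] * 5)         # alternating domains: zero net moment
print(ising_energy(aligned, J=1.0, beta_term=0.1))   # -11.0, favoured when J dominates
print(ising_energy(mixed,   J=1.0, beta_term=0.1))   #  +9.0, coupling cost, no moment
```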
The maximum memory was found for ferritin, which is an iron-oxide-containing magnetic protein and shows permanent magnetization (with a very low magnetic moment at room temperature). This also proves that the observed memory is related to the generation of positive magnetization at low magnetic field. It has been shown that the magnetism of the ferritin core changes with the iron content of the core[13], but the physiological significance of such magnetic dynamism of a protein is still not clear. In this context we can say that modulation of the intrinsic magnetic properties of ferritin may be utilized to control the walk pattern of this protein in vivo. Our findings highlight that even proteins can be subjected to magnetically controlled behavior, the memory component of the magnetic contribution depending on the protein shape and dimensionality. It may be noted that simple diamagnetic properties would not suffice to explain the memory aspect. The observation may have important implications in another context. The presence of superparamagnetism at the scale of individual amino acids and the emergence of a ferromagnet-like memory (at low field), manifested in the random walk of proteins in solution and also in protein motion in live cells (revised manuscript under preparation), imply the presence of a temporally stable, long-term spatial coherence under physiological conditions. This inspired a direct exploitation of the Ising model. In addition, it has been shown that the instant response of live cells to a magnetic field is an alteration of sub-cellular streaming. In quantum biology, some approaches have focused on the sensing of weak fields[14][15] and on events triggered by extremely small-scale temporal rhythms, often termed quantum beats[16]. The presence of large spatial coherence, reflected in the random walk of a variety of proteins, has rarely been discussed. The observation may therefore be relevant to deciphering many cellular communication processes, in which the importance of spatial coherence is very rarely discussed. Recently, research groups have become interested in studying the functional aspects of membrane-less protein assemblies within the cell[17], which are formed by liquid-liquid phase transitions. In this context our observation may help to conceptualize the role of magnetic interactions in the formation and stability of such liquid droplets. The superparamagnetic behavior of tryptophan is reported for the first time. Protein motion in solution thus shows a magnetic-field-dependent memory of the walk pattern. The finding may provide an alternative module for biological magneto-sensing, which is often described by the radical pair mechanism. A future perspective of the work will be to understand the magnetic response of normal and cancer cells. As the observed magnetic response is protein-specific, different cell types may also respond in different ways. If we gain insight into the exact mechanism by which live cells translate such physical forces, we may be able to use such forces for therapeutic approaches.

L-tryptophan, ferritin and fibrinogen were purchased from Sigma. The bovine serum albumin (BSA) was purchased from SRL. Millipore water was used to prepare stock solutions and dilutions. Protein solutions were prepared using 100 mM phosphate buffer of pH 7.22. The permanent neodymium magnets were purchased from Rare Earth Magnetics, India. The field strength was measured using a locally made Gaussmeter (Neoequipments, India).
Magnetic measurement of tryptophan: 20 mg L-tryptophan was dissolved in 10 ml of MilliQ water (basic). The prepared 10 ml tryptophan solution was divided into two parts and dried using a lyophilizer, one part in the absence of a magnetic field and the other in the presence of a 0.2 Tesla magnetic field. Magnetic measurements of the dried samples were then performed in a vibrating sample magnetometer (Quantum Design, MPMS 7). Both samples were heated to 300 K in the presence of a 100 Oersted field, and cooling down to 4 K was then performed in the presence (field cooling, FC) and in the absence (zero field cooling, ZFC) of a magnetic field. The magnetization vs field (M-H) study was performed at 298 K using an externally applied oscillatory magnetic field varying from -1 to +1 Tesla. In Figure 1b the -0.4 to +0.4 T (-4000 to +4000 Oe) field range is shown.

Scattering based imaging of protein motion: Scattering based imaging of the proteins (BSA, ferritin and fibrinogen) was performed using a Nanoparticle Tracking Analysis (NTA) system (NanoSight NS300, Malvern, United Kingdom) equipped with a high sensitivity CMOS camera and a 532 nm laser. All measurements were performed at room temperature. NTA 2.3 software was used to store the imaging data as ASCII files as well as in AVI video format. The ASCII file contains the particle number (ID), the X, Y coordinates of that particle and the time of the particle tracking. A 0.1 mg/ml protein solution was imaged before, after 1 minute and after 5 minutes of magnetic field exposure. At first we took the sample in a thin film sample holder and imaged for 1 minute. Then we kept a magnet on top of the sample film for 1 minute and again imaged for 1 minute after removing the magnet. We performed the same step with 5 minutes of magnetic incubation. We used the ASCII files for our analysis. We constructed gray scale phase space trajectories of the proteins using Matlab (MathWorks, USA).

M-code:

    function [z, w] = raja_grid(x, y, I, m, n)
    % m & n are the sizes of the output matrix (here 100 x 100)
    % x, y: particle coordinates from the NTA ASCII file
    % I: per-particle scattered intensity (used but not declared in the original
    %    listing; added here as an input argument)
    ik = find(x < 0); x(ik) = []; y(ik) = []; I(ik) = [];   % discard negative coordinates
    ik = find(y < 0); x(ik) = []; y(ik) = []; I(ik) = [];
    x = x / max(x);                                         % normalize coordinates
    y = y / max(y);
    z = zeros(m, n);
    xl = linspace(min(x), max(x), m);                       % grid edges
    yl = linspace(min(y), max(y), n);
    for i = 1:m-1
        for j = 1:n-1
            kk = find(x >= xl(i) & x <= xl(i+1) & y >= yl(j) & y <= yl(j+1));
            if isempty(kk) ~= 1
                z(i,j) = 255 * (sum(I(kk)) ./ max(I(kk)));  % visited cells appear white
            end
        end
    end
    z = uint8(z);
    w(:,:,1) = z; w(:,:,2) = z; w(:,:,3) = z;   % w is the gray scale output image

The work is supported by the Centre of Excellence in Systems Biology and Biomedical Engineering, University of Calcutta. The authors thank Ms. Namrata Jain, Malvern Instruments, India for her excellent technical support. The work was done with commercially available proteins and did not involve any animal or human samples. No fraudulence was committed in performing these experiments or during processing of the data. We understand that in the case of fraudulence, the study can be retracted by Matters.

1. Don Devault. Quantum mechanical tunnelling in biological systems. Quarterly Reviews of Biophysics, 13/1980, page 387. DOI: 10.1017/s003358350000175x
2. H. B. Gray, J. R. Winkler. Long-range electron transfer. Proceedings of the National Academy of Sciences, 102/2005, pages 3534-3539. DOI: 10.1073/pnas.0408029102
3. Neill Lambert, Yueh-Nan Chen, Yuan-Chung Cheng, …, Franco Nori. Quantum biology. Nat Phys, 9/2012, pages 10-18. DOI: 10.1038/nphys2474
4. Jayendra N. Bandyopadhyay, Tomasz Paterek, Dagomir Kaszlikowski. Quantum Coherence and Sensitivity of Avian Magnetoreception
5. A. A.
Lee, J. C. S. Lau, H. J. Hogben, …, P. J. Hore. Alternative radical pairs for cryptochrome-based magnetoreception. Journal of The Royal Society Interface, 11/2014, pages 20131063-20131063. DOI: 10.1098/rsif.2013.1063
6. C. P. Bean, J. D. Livingston. Superparamagnetism. J. Appl. Phys., 30/1959, page S120. DOI: 10.1063/1.2185850
7. S. Banerjee, S. O. Raja, M. Sardar, …, A. Dasgupta. Iron oxide nanoparticles coated with gold: Enhanced magnetic moment due to interfacial effects. J. Appl. Phys., 109/2011, page 123902. DOI: 10.1063/1.3596760
8. Sufi O. Raja, Anjan K. Dasgupta. Magnetic field induced self assembly and optical memory of pi-ring containing fluorophores. Chemical Physics Letters, 554/2012, pages 163-167. DOI: 10.1016/j.cplett.2012.10.040
9. Sorin Istrail. Special RECOMB'99 Issue. Journal of Computational Biology, 6/1999, pages 279-279. DOI: 10.1089/106652799318265
10. E. R. Henry, R. B. Best, W. A. Eaton. Comparing a simple theoretical model for protein folding with all-atom molecular dynamics simulations. Proceedings of the National Academy of Sciences, 110/2013, pages 17880-17885. DOI: 10.1073/pnas.1317105110
11. Vladimir B Teif, Karsten Rippe. Statistical–mechanical lattice models for protein–DNA binding in chromatin. Journal of Physics: Condensed Matter, 22/2010, page 414105. DOI: 10.1088/0953-8984/22/41/414105
12. J.-P. Changeux, J. Thiery, Y. Tung, C. Kittel. On the cooperativity of biological membranes. Proceedings of the National Academy of Sciences, 57/1967, pages 335-341. DOI: 10.1073/pnas.57.2.335
13. S. Hilty. Iron core formation in horse spleen ferritin: Magnetic susceptibility, pH, and compositional studies. Journal of Inorganic Biochemistry, 56/1994, pages 173-185. DOI: 10.1016/0162-0134(94)85004-6
14. K. Maeda, A. J. Robinson, K. B. Henbest, …, P. J. Hore. Magnetically sensitive light-induced reactions in cryptochrome are consistent with its proposed role as a magnetoreceptor. Proceedings of the National Academy of Sciences, 109/2012, pages 4774-4779. DOI: 10.1073/pnas.1118959109
15. Siying Qin, Hang Yin, Celi Yang, …, Can Xie. A magnetic protein biocompass
16. Gabor Vattay, Stuart Kauffman, Samuli Niiranen. Quantum Biology on the Edge of Quantum Chaos
17. Clifford P. Brangwynne, Peter Tompa, Rohit V. Pappu. Polymer physics of intracellular phase transitions. Nat Phys, 11/2015, pages 899-904. DOI: 10.1038/nphys3532
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6275400519371033, "perplexity": 2544.910061227808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691476.47/warc/CC-MAIN-20170925111643-20170925131643-00499.warc.gz"}
https://chemistry.stackexchange.com/questions/152539/why-is-potassium-less-dense-than-sodium
# Why is potassium less dense than sodium?

Potassium has a density of $$\pu{0.86 g/cm3}$$ and sodium has a density of $$\pu{0.97 g/cm3}$$, even though potassium is below sodium and one might expect the alkali metals to exhibit monotonically increasing densities down the group. Why is this discrepancy the case? All of the answers I found were either unrelated or vague.

• Googling leads quickly to things like quora.com/… – Jon Custer Jun 6 at 2:36
• I don't know why you were downvoted, although the phrasing of the question could definitely be improved. I could not find an answer on the site (although perhaps I should have looked a little longer; sometimes answers are hidden within posts that are actually asking something else). The answer (obviously) has to do with differences in electronic structure, and the presence of low energy 3d orbitals. It would be nice to see a more detailed answer to this; it is unlikely to be simple. To get to the heart of the question you need to look into the electronic structure of atoms. – Buck Thorn Jun 6 at 9:36
• @JonCuster after some 20 minutes searching I could not find a particularly good answer (other than trivial ones, or something like "3d orbitals are available", which is next to trivial). Answers on other sites are often terrible. A good answer might however require some discussion of electronic structure beyond an introductory chem level. – Buck Thorn Jun 6 at 9:44
• @BuckThorn I doubt it is electronic structure; more likely it's just that K is lighter than you expect - after all, it is one of the elements that's put in the wrong place if you order things by atomic weight, being lighter than argon. And I agree "3d orbitals are available" is dreadful; I see that more as an explanation for K being denser than expected, due to the 3d contraction – Ian Bush Jun 6 at 14:36

Although density may be a relatively easily measurable property of solid materials, it may not suggest the most fundamental relationship between mass and volume for the elements. Molar volume (the volume required to contain $$6 \times 10^{23}$$ atoms of each of the elements) is pictured in the graph below (WebElements): It is amazing that ~100 elements can be arranged in a table with so many regularities - and even the irregularities can be explained if you try hard enough. There are some that are more unexpected and greater than the $$\ce{Na-K}$$ difference. It is good that both $$\ce{Na}$$ and $$\ce{K}$$ have the same crystal structure (body-centered cubic). To explain the $$\ce{Na-K}$$ difference, some other differences should also be taken into account. Both have one electron outside a completed shell system - but sodium's $$\mathrm{3s}$$ electron (2 nodes) is dealing with a different kind of shielding compared to potassium's $$\mathrm{4s}$$ electron (3 nodes). In fact, the outer $$\mathrm{3s}$$ electron of sodium is ionized at $$\pu{5.139 eV} \ (\pu{496 kJ/mol})$$, while potassium is ionized more easily, at $$\pu{4.34 eV} \ (\pu{418.8 kJ/mol})$$. Since potassium holds onto its last electron so weakly (relative to sodium), the "sea of electrons" that helps hold the crystalline metal together isn't attracting the potassium ions as well as the sea does for sodium - not binding the crystal together as tightly. Result: lower density. Complete explanation? Probably not, but it's a start.
Although the enthalpy of fusion for both $$\ce{Na}$$ and $$\ce{K}$$ is given as $$\pu{20.5 kJ/mol}$$, the melting points are $$\pu{63.38 ^\circ C}$$ for $$\ce{K}$$ and $$\pu{97.7 ^\circ C}$$ for $$\ce{Na}$$, suggesting that sodium is slightly more tightly bound in the crystal. Even the boiling points, $$\pu{759 ^\circ C}$$ for $$\ce{K}$$ vs $$\pu{883 ^\circ C}$$ for $$\ce{Na}$$, suggest that sodium atoms are bound more tightly than potassium in the liquid. Interestingly, if you eliminate the $$\mathrm{3s}$$ electron of sodium and the $$\mathrm{4s}$$ electron of potassium, you drop to neon, which has a solid density of $$\pu{1.444 g/cm3}$$, and argon, with a solid density of $$\pu{1.616 g/cm3}$$. These two densities are about what you would expect; i.e., increasing as you go down the column. Note that the corresponding molar volumes are $$\pu{13.23 cm^3}$$ for $$\ce{Ne}$$ and $$\pu{22.56 cm^3}$$ for $$\ce{Ar}$$. The same trend is found in the molar volumes of $$\ce{Na}$$ $$(\pu{23.78 cm^3})$$ and $$\ce{K}$$ $$(\pu{45.94 cm^3})$$, and the "discrepancy" is obscured.

Although I liked James Gaidis's answer, I do not agree with some of the arguments, because they are all parts of one or more continuous trends. For instance, look at the melting point and boiling point trends of the alkali metals and other trends as illustrated in the following table: $$\begin{array}{l|ccccccccc} \hline \bf{\text{Physical property}} & \ce{Li} && \ce{Na} && \ce{K} && \ce{Rb} && \ce{Cs} \\ \hline \text{Melting point, }\pu{^\circ C} & 180.54 & \gt & 97.72 & \gt & 63.38 & \gt & 39.31 & \gt & 28.44 \\ \text{Boiling point, }\pu{^\circ C} & 1342 & \gt & 883 & \gt & 759 & \gt & 688 & \gt & 671 \\ \text{Heat of fusion, }\pu{kJ mol-1} & 3.00 & \gt & 2.60 & \gt & 2.32 & \gt & 2.19 & \gt & 2.09 \\ \text{Heat of vaporization, }\pu{kJ mol-1} & 136 & \gt & 97.42 & \gt & 79.1 & \gt & 69 & \gt & 66.1 \\ \text{First ionization energy, }\pu{kJ mol-1} & 520.2 & \gt & 495.8 & \gt & 418.8 & \gt & 403.0 & \gt & 375.7 \\ \text{Density, }\pu{g cm-3} & 0.534 & \lt & 0.968 & \gt & 0.89 & \lt & 1.532 & \lt & 1.93 \\ \hline \end{array}\\ \text{Source of the data: https://en.wikipedia.org/wiki/Alkali_metal}$$ As shown in the above table, only the density breaks the trend. Therefore, reasoning about this density inconsistency using the other physical measures in the table is somewhat valid but not solid (see Buck Thorn's comments elsewhere). I'd like to give a different angle on this uneven trend. Density is generally defined as the mass of unit volume $$(\rho = \frac{m}{v})$$. When you consider the density of an element, it is better to consider the molar mass and molar volume in place of the mass and volume used for other solids. Since the question is about the density difference of sodium $$(\ce{Na})$$ and potassium $$(\ce{K})$$, let's concentrate on the alkali metal group. As is usual and easily understood, the mass of a metal increases going down the group. It is also clear that the principal quantum number $$(n)$$ increases going down the group, and hence the volume should increase as well (adding more shells). The question is, is the rate of increase of mass comparable with the rate of increase of volume for each element?
Let's make a table for this trend: $$\begin{array}{l|ccccc} \hline \bf{\text{Property}} & \ce{_{3}Li} & \ce{_{11}Na} & \ce{_{19}K} & \ce{_{37}Rb} & \ce{_{55}Cs} \\ \hline \text{Molar mass, }\pu{g mol-1} & 6.94 & 22.99 & 39.10 & 85.47 & 132.91 \\ \text{Percent mass increment} & - & \approx 231.3\% & \approx 70.1\% & \approx 118.6\% & \approx 55.5\% \\ \text{Protons/neutrons} & 3/4 & 11/12 & 19/20 & 37/48 & 55/78 \\ \text{Calculated atomic radii, }\pu{pm} & 167 & 190 & 243 & 265 & 298 \\ \text{Measured covalent radii, }\pu{pm} & 145 & 180 & 220 & 235 & 260 \\ a\text{ (side of the unit cell), }\pu{pm} & 351 & 429.6 & 532.8 & 558.5 & 614.1 \\ \text{Atomic radii from }a\text{, }\pu{pm} & 152 & 186 & 231 & 242 & 266 \\ \text{Percent volume increment (exp)} & - & \approx 22.4\% & \approx 24.2\% & \approx 4.8\% & \approx 9.9\% \\ \hline \end{array}\\ \text{Source of the data: https://en.wikipedia.org/wiki/Atomic_radius}$$ In the table, the atomic radii computed from theoretical models (calculated atomic radii) are from the published work of Enrico Clementi and others (Ref. 1), while the measured covalent radii for the metals are from the published work of J. C. Slater (Ref. 2). The experimental metallic radii are from published crystal data. As illustrated in the above table, when going from $$\ce{Li}$$ to $$\ce{Na}$$, the mass increases by $$231\%$$ but the volume increases by only $$22\%$$. Thus, without a doubt, $$\ce{Na}$$ should have a higher density than $$\ce{Li}$$. Similarly, we can argue that the same principle applies when going from $$\ce{K}$$ to $$\ce{Rb}$$ and from $$\ce{Rb}$$ to $$\ce{Cs}$$ to explain the trend. However, to explain the broken trend when going from $$\ce{Na}$$ to $$\ce{K}$$, we should look at the rates. When going from $$\ce{Li}$$ to $$\ce{Na}$$ and going from $$\ce{Na}$$ to $$\ce{K}$$, the percent volume change is basically the same (22% vs 24%). Yet that increment, together with a $$+231\%$$ mass increment (more than a threefold increase in mass), has given only a $$0.964 - 0.534 = 0.43$$ density change. Therefore, one can expect a much smaller density increment (if there is any) when going from $$\ce{Na}$$ to $$\ce{K}$$, because the mass increment is minimal (less than 100%) when compared to the percent volume increment. On the other hand, when going down through a group, each period of the periodic table adds one shell. However, when going down from $$\ce{Li}$$ to $$\ce{Na}$$ and from $$\ce{Na}$$ to $$\ce{K}$$, the relevant nucleus gains only 8 protons. After that the addition of protons increases significantly (from $$\ce{K}$$ to $$\ce{Rb}$$ it gains 18 protons, and from $$\ce{Rb}$$ to $$\ce{Cs}$$ another 18 protons). The nearly equal percent volume increments in either case suggest that the effects of opposite-charge attractions and electron repulsions are minimal in both cases. The bottom line is that there is a clear discrepancy between the percent mass increment and the percent volume increment at the transition from $$\ce{Na}$$ to $$\ce{K}$$. I have calculated the percent volume increment using experimental crystal data as follows: All of the alkali metals crystallize with the same crystal packing, called body-centered cubic (bcc), which consists of one atom in the center of the cube and eight other atoms at the eight corners of the cube surrounding the central atom, each of them touching the central atom. Thus, the body diagonal of the cube is equal to $$4r$$, where $$r$$ is the radius of the atom. If the length of each side of the cube is $$a$$, then the body diagonal is equal to $$\sqrt{3}a$$.
Thus, $$r = \frac{\sqrt{3}a}{4} \tag1$$ Keep in mind that $$a$$ depends on the atomic radius of each metal and on how closely the atoms pack against each other under the relevant forces (e.g., van der Waals). For instance, it has been reported that the closest $$\ce{Na-Na}$$ separation is $$\pu{372 pm}$$, implying a sodium metallic radius of $$\pu{186 pm}$$, while the closest $$\ce{K-K}$$ separation is $$\pu{461 pm}$$, implying a potassium metallic radius of $$\pu{231 pm}$$ (both calculated by using equation $$(1)$$). The molar volume of the metal is not just the calculated volume of an atom (treated as a sphere) times Avogadro's number. The void volume between the atoms must be taken into account; packing is a big factor in volume calculations, and thus in the density of the metal. The following calculations show the effect of the packing on the density: If you inspect closely, you will realize that each of the eight corner atoms is shared among eight unit cells, while the center atom belongs to only one unit cell. Thus, the total number of atoms per unit cell is $$8 \times \frac{1}{8} + 1 = 2$$. The volume of the unit cell is $$a^3$$, which is therefore the volume occupied by $$\pu{2 atoms}$$. Now, you can calculate the molar volume of each metal ($$V_\ce{M}$$): $$V_\ce{M} = a_\ce{M}^3 \times \frac{1}{\pu{2 atoms}} \times N_A \tag2$$ The units of the volume depend on the units you have used for $$a$$. For instance, for $$\ce{Na}$$: $$V_\ce{Na} = (\pu{429.6 pm})^3 \times \left(\frac{\pu{1 cm}}{\pu{10^10 pm}}\right)^3 \times \frac{1}{\pu{2 atoms}} \times \pu{6.022 \times 10^{23} atoms\:mol-1} \\ = (429.6)^3 \times \pu{3.011 \times 10^{-7} cm3 mol-1} = \pu{23.87 cm3 mol-1}$$ Similarly, for $$\ce{K}$$ and the other alkali metals: $$V_\ce{K} = (532.8)^3 \times \pu{3.011 \times 10^{-7} cm3 mol-1} = \pu{45.54 cm3 mol-1}$$ $$V_\ce{Li} = (351)^3 \times \pu{3.011 \times 10^{-7} cm3 mol-1} = \pu{13.02 cm3 mol-1}$$ $$V_\ce{Rb} = (558.5)^3 \times \pu{3.011 \times 10^{-7} cm3 mol-1} = \pu{52.45 cm3 mol-1}$$ $$V_\ce{Cs} = (614.1)^3 \times \pu{3.011 \times 10^{-7} cm3 mol-1} = \pu{69.73 cm3 mol-1}$$ Once we know the molar volume of each metal, we can calculate the density ($$\rho$$) by using $$\rho=\frac{\text {molar mass}}{\text {molar volume}}$$: $$\rho_\ce{Li} = \frac{\pu{6.94 g mol-1}}{\pu{13.02 cm3 mol-1}} = \pu{0.533 g cm-3}$$ $$\rho_\ce{Na} = \frac{\pu{22.99 g mol-1}}{\pu{23.87 cm3 mol-1}} = \pu{0.963 g cm-3}$$ $$\rho_\ce{K} = \frac{\pu{39.10 g mol-1}}{\pu{45.54 cm3 mol-1}} = \pu{0.859 g cm-3}$$ $$\rho_\ce{Rb} = \frac{\pu{85.47 g mol-1}}{\pu{52.45 cm3 mol-1}} = \pu{1.630 g cm-3}$$ $$\rho_\ce{Cs} = \frac{\pu{132.91 g mol-1}}{\pu{69.73 cm3 mol-1}} = \pu{1.906 g cm-3}$$ These calculated values are consistent with the experimental values. Therefore, it is safe to say that the increasing density trend of the alkali metals is broken only between $$\ce{Na}$$ and $$\ce{K}$$; after that, it continues as follows: $$\rho_\ce{Li} \lt \rho_\ce{Na} \gt \rho_\ce{K} \lt \rho_\ce{Rb} \lt \rho_\ce{Cs}$$

References:

1. E. Clementi, D. L. Raimondi, and W. P. Reinhardt, "Atomic Screening Constants from SCF Functions. II. Atoms with 37 to 86 Electrons," J. Chem. Phys. 1967, 47(4), 1300–1307 (DOI: https://doi.org/10.1063/1.1712084).
2. J. C. Slater, "Atomic Radii in Crystals," J. Chem. Phys. 1964, 41(4), 3199–3205 (DOI: https://doi.org/10.1063/1.1725697).

• BTW, thanks for the edit on my answer! I totally flubbed the bcc. – James Gaidis Jun 9 at 13:49
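The bcc arithmetic in the answer above is easy to script. Here is a short Python sketch (my own illustration, using the lattice constants in pm and the molar masses quoted above) that reproduces the molar volumes and densities:

# bcc lattice constants a (pm) and molar masses M (g/mol) quoted above
a_pm = {"Li": 351.0, "Na": 429.6, "K": 532.8, "Rb": 558.5, "Cs": 614.1}
M = {"Li": 6.94, "Na": 22.99, "K": 39.10, "Rb": 85.47, "Cs": 132.91}
N_A = 6.022e23        # Avogadro's number, 1/mol
atoms_per_cell = 2    # bcc: 8*(1/8) corner atoms + 1 body-centre atom

for el, a in a_pm.items():
    a_cm = a * 1e-10                              # pm -> cm
    V_molar = a_cm**3 / atoms_per_cell * N_A      # cm^3/mol
    rho = M[el] / V_molar                         # g/cm^3
    print(f"{el}: V = {V_molar:5.2f} cm^3/mol, rho = {rho:5.3f} g/cm^3")

The output matches the hand calculation: the density rises from Li to Na, dips at K, and then rises again through Rb and Cs.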
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 105, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7745680809020996, "perplexity": 628.1387041554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155268.80/warc/CC-MAIN-20210805000836-20210805030836-00596.warc.gz"}
https://tex.stackexchange.com/questions/160137/is-there-any-way-to-get-something-like-pmatrix-with-customizable-grid-lines-betw?noredirect=1
# Is there any way to get something like pmatrix with customizable grid lines between cells? [duplicate]

In the document I have to describe a series of transformations made with a matrix. Each transformation works only on a 2x2 or 1x1 block, so I want to visually select this block in the matrix like this: I can type the matrix using the pmatrix environment, but I don't know how to draw the rectangle. What is the best way to achieve this?

## marked as duplicate by Werner, egreg, Svend Tveskæg, mafp, Martin Schröder Feb 12 '14 at 22:16

• Perhaps the following is helpful/sufficient/duplicate: Highlight elements in the matrix – Werner Feb 12 '14 at 20:32
• @Werner, the link you provided was indeed very helpful. I managed to edit the code provided in the question you linked to do what I wanted. For the sake of reference I wrote the resulting code in the answer below. Thank you. – fiktor Feb 12 '14 at 21:44

My question was indeed close to a duplicate, as was hinted by @Werner. For the sake of reference I provide the code which draws what I wanted. The code was created after analyzing the answer linked by @Werner.

\begin{tikzpicture}[baseline=(current bounding box.center)]
\matrix [matrix of math nodes,left delimiter=(,right delimiter=)] (m)
{
\!1 & 0 & 0\!\!\! \\
\!0 & {P_\theta \otimes P} & 0\!\!\! \\
\!0 & 0 & 0\!\!\! \\
};
\draw (m-1-1.north west) -- (m-1-3.north west) -- (m-3-3.north west) -- (m-3-1.north west) -- (m-1-1.north west);
\end{tikzpicture}

The following code should be included in the preamble.

\usepackage{tikz}
\usetikzlibrary{arrows,matrix,positioning}

This gives the following.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9385836720466614, "perplexity": 1425.5240239264929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987828425.99/warc/CC-MAIN-20191023015841-20191023043341-00389.warc.gz"}
https://quantixed.org/2018/06/27/pentagrammarspin-why-twelve-pentagons/
# Pentagrammarspin: why twelve pentagons?

This post has been in my drafts folder for a while. With the World Cup here, it's time to post it! It's a rule that a 3D assembly of hexagons must have at least twelve pentagons in order to be a closed polyhedral shape. This post takes a look at why this is true. First, some examples from nature. The stinkhorn fungus Clathrus ruber has a largely hexagonal layout, with pentagons inserted. The core of HIV has to contain twelve pentagons (shown in red, in this image from the Briggs group) amongst many hexagonal units. My personal favourite, the clathrin cage, can assemble into many buckminsterfullerene-like shapes, but all must contain at least twelve pentagons with a variable number of hexagons. The case of clathrin is particularly interesting because clathrin triskelia can assemble into a flat hexagonal lattice on membranes. If clathrin is going to coat a vesicle, that means 12 pentagons need to be introduced. So there needs to be quite a bit of rearrangement in order to do this. You can see the same rule in everyday objects. The best example is a football, or soccer ball if you are reading in the USA. The classic design of football has precisely twelve pentagons and twenty hexagonal panels. The road sign for football stadia here in the UK shows a weirdly distorted hexagonal array that has no pentagons. 22,543 people signed a petition to pressurise the authorities to change it, but the Government responded that it was too costly to correct this geometrical error. So why do all of these assemblies have 12 pentagons? In the classic text "On Growth and Form" by D'Arcy Wentworth Thompson, polyhedral forms in nature are explored in some detail. In the wonderfully titled On Concretions, Spicules etc. section, the author notes polyhedral forms in natural objects. One example is Dorataspis, shown left. The layout is identical to the D6 hexagonal barrel assembly of a clathrin cage shown above. There is a belt of six hexagons, one at the top, one at the bottom (eight total) and twelve pentagons between the hexagons. In the book, there is an explanation of the maths behind why there must be twelve pentagons in such assemblies, but it's obfuscated in bizarre footnotes in Latin. I'll attempt to explain it below. To shed some light on this we need the help of Euler's formulae. The surface of a polyhedron in 3D is composed of faces, edges and vertices. If we think back to the football, the faces are the pentagonal and hexagonal panels, the edges are the stitching where two panels meet and the vertices are where three edges come together. We can denote faces, edges and vertices as f, e and v, respectively. These are 2D, 1D and zero-dimensional objects, respectively. Euler's formula, which is true for any polyhedron that is topologically a sphere, is: $$f - e + v = 2$$ If you think about a cube, it has six faces. It has 12 edges and 8 vertices. So, 6 - 12 + 8 = 2. We can also check the football above. This has 32 faces (twelve pentagons, twenty hexagons), 90 edges and sixty vertices. 32 - 90 + 60 = 2. Feel free to check it with other polyhedra! Euler found a second formula which is true for polyhedra where three edges come together at a vertex. $$\sum (6-n)f_{n} = 12$$ In this formula, $$f_{n}$$ means the number of n-gons. So let's say we have a dodecahedron, which is a polyhedron made of 12 pentagons. So $$n$$ = 5 and $$f_{n}$$ = 12, and you can see that $$(6-5)12 = 12$$. Let's take a more complicated object, like the football.
Now we have: $$((6-6)\cdot 20) + ((6-5)\cdot 12) = 12$$ You can now see why the twelve pentagons are needed. Because 6-6 = 0, we can add as many hexagons as we like; this will add nothing to the left hand side. As long as the twelve pentagons are there, we will have a polyhedron. Without them we won't. This is the answer to why there must be twelve pentagons in a closed polyhedral assembly. So how did Euler get to the second equation? You might have spotted this yourself from the f, e, v values for the football. Did you notice that the ratio of edges to vertices is 3:2? This is because each edge has two vertices, one at either end (it is a 1D object), and remember we are dealing with polyhedra with three edges at each vertex. So $$v = \frac{2}{3}e$$. Also, each edge is at the boundary of two polygons. So $$e = \frac{1}{2}\sum n f_{n}$$. You can check that with the values for the cube or football above. We know that $$f = \sum f_{n}$$; this just means that the number of faces is the sum of the numbers of all n-gons. This means that $$f - e + v = 2$$ can be turned into $$f - \frac{1}{3}e = \sum f_{n} - \frac{1}{6}\sum n f_{n} = 2$$ Let's multiply by 6 to get, oh yes, $$\sum (6-n)f_{n} = 12$$ There are some topics for further exploration here: • You can add 0, 2 or 10000 hexagons to 12 pentagons to make a polyhedron, but can you add just one? • What happens when you add a few heptagons into the array? Image credits (free-to-use/wiki or): Clathrus ruber – tineye search didn't find source. HIV cores – Briggs Group. Exploded football – Quora. The post title comes from "Pentagrammarspin" by Steve Hillage from the 2006 remaster of his LP Fish Rising
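For anyone who wants to check these counts programmatically, here is a short Python sketch (my own addition, not from the original post) that derives e and v from the face list of the classic football, assuming three edges meet at every vertex, and verifies both formulas used above.

# face list of the classic football (truncated icosahedron): {n-gon: count}
faces = {5: 12, 6: 20}

f = sum(faces.values())                         # 32 faces
e = sum(n * c for n, c in faces.items()) // 2   # each edge borders two faces -> 90
v = 2 * e // 3                                  # three edges meet at each vertex -> 60

print(f - e + v)                                   # Euler's formula: 2
print(sum((6 - n) * c for n, c in faces.items()))  # second formula: 12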
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5505052804946899, "perplexity": 1070.4785995369748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657134758.80/warc/CC-MAIN-20200712082512-20200712112512-00308.warc.gz"}
http://www.distance-calculator.co.uk/towns-within-a-radius-of.php?t=Ambes&c=France
Cities, Towns and Places within a 60 mile radius of Ambes, France

Get a list of towns within a 60 mile radius of Ambes or between two set distances, and click on the markers in the satellite map to get maps and road trip directions. If this didn't quite work how you thought then you might like the Places near Ambes map tool (beta). The radius entered failed to produce results - we reset it to a max of 60 miles - apologies for any inconvenience.

Showing 0 to 100 places between 10 and 60 miles of Ambes (increase the miles radius to get more places returned *): Arcins, France is 10 miles away Bacalan, France is 10 miles away Blaye, France is 10 miles away Bruges, France is 10 miles away Fort Medoc, France is 10 miles away Le Taillan, France is 10 miles away Le Taillan-medoc, France is 10 miles away Lormont, France is 10 miles away Marcenais, France is 10 miles away Saint Yzan, France is 10 miles away Saint-savin-de-blay, France is 10 miles away Saint-seurin-de-cursac, France is 10 miles away Saint-sulpice-et-cameyrac, France is 10 miles away Saint-yzan-de-soudiac, France is 10 miles away Tarnes, France is 10 miles away Verac, France is 10 miles away Avensan, France is 11 miles away Beychac, France is 11 miles away Beychac-et-caillau, France is 11 miles away Cameyrac, France is 11 miles away Eysines, France is 11 miles away Izon, France is 11 miles away Le Bouscat, France is 11 miles away Lugon, France is 11 miles away Martins, France is 11 miles away Mazion, France is 11 miles away Montussan, France is 11 miles away Perissac, France is 11 miles away Saint-genes-de-blaye, France is 11 miles away Saint-germain-de-la-riviere, France is 11 miles away Saint-martin-lacaussade, France is 11 miles away Artigues, France is 12 miles away Artigues-pres-bordeaux, France is 12 miles away Campugnan, France is 12 miles away Cartelegue, France is 12 miles away Cauderan, France is 12 miles away Cenon, France is 12 miles away Coudot, France is 12 miles away Eyrans, France is 12 miles away Eyrans-de Soudiac, France is 12 miles away Generac, France is 12 miles away Laruscade, France is 12 miles away Medrac, France is 12 miles away Moulis-en-medoc, France is 12 miles away Saint-michel-de-fronsac, France is 12 miles away Saugon, France is 12 miles away Tizac-de-lapouyade, France is 12 miles away Tresses, France is 12 miles away Villegouge, France is 12 miles away Bordeaux, France is 13 miles away Burdigala, France is 13 miles away Castelnau-de-medoc, France is 13 miles away Floirac, France is 13 miles away Galgon, France is 13 miles away Lapouyade, France is 13 miles away Le Coudonneau, France is 13 miles away Le Haillan, France is 13 miles away Pompignac, France is 13 miles away Saint-androny, France is 13 miles away Saint-ciers-d abzac, France is 13 miles away Saint-martin-du-bois, France is 13 miles away Saint-medard-en-jalles, France is 13 miles away Vayres, France is 13 miles away Berdot, France is 14 miles away Beychevelle, France is 14 miles away Bouliac, France is 14 miles away etauliers, France is 14 miles away Fargues-saint-hilaire, France is 14 miles away Listrac-medoc, France is 14 miles away Maransin, France is 14 miles away Merignac, France is 14 miles away Port-de-la-belle-etoile, France is 14 miles away Saint-julien-beychevelle, France is 14 miles away Salleboeuf,
France is 14 miles away Talence, France is 14 miles away Vrillant, France is 14 miles away Anglade, France is 15 miles away Arveyres, France is 15 miles away Bedenac, France is 15 miles away Begles, France is 15 miles away Bonnetan, France is 15 miles away Camarsac, France is 15 miles away Carignan, France is 15 miles away Fronsac, France is 15 miles away Les Billaux, France is 15 miles away Reignac, France is 15 miles away Bonzac, France is 16 miles away Bussac, France is 16 miles away Bussac-foret, France is 16 miles away Latresne, France is 16 miles away Libourne, France is 16 miles away Loupes, France is 16 miles away Pessac, France is 16 miles away Saint-denis-de-pile, France is 16 miles away Saint-germain-du-puch, France is 16 miles away Saint-martin-de-laye, France is 16 miles away Salaunes, France is 16 miles away Bayas, France is 17 miles away Braud, France is 17 miles away Braud-et-saint-louis, France is 17 miles away

Need to calculate a distance for Ambes, France? Use this Ambes distance calculator. To view distances for France alone, use this France distance calculator. If you have a question relating to this area then we'd love to hear it! Check out our Facebook, G+ or Twitter pages above! Don't forget you can increase the radius in the tool above to 50, 100 or 1000 miles to get a list of towns or cities that are in the vicinity of or are local to Ambes. You can also specify a list of towns or places that you want returned between two distances in both miles (mi) or kilometres (km). * results returned are limited for each query
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001225829124451, "perplexity": 10007.321429994692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347426801.75/warc/CC-MAIN-20200602193431-20200602223431-00074.warc.gz"}
http://www.computer.org/csdl/trans/tp/2005/04/i0482-abs.html
Publication 2005 Issue No. 4 - April

Abstract - Feature Space Interpretation of SVMs with Indefinite Kernels

April 2005 (vol. 27 no. 4), pp. 482-492

Bernard Haasdonk

Kernel methods are becoming increasingly popular for various kinds of machine learning tasks, the most famous being the support vector machine (SVM) for classification. The SVM is well understood when using conditionally positive definite (cpd) kernel functions. However, in practice, non-cpd kernels arise and demand application in SVMs. The procedure of "plugging" these indefinite kernels in SVMs often yields good empirical classification results. However, they are hard to interpret due to missing geometrical and theoretical understanding. In this paper, we provide a step toward the comprehension of SVM classifiers in these situations. We give a geometric interpretation of SVMs with indefinite kernel functions. We show that such SVMs are optimal hyperplane classifiers not by margin maximization, but by minimization of distances between convex hulls in pseudo-Euclidean spaces. By this, we obtain a sound framework and motivation for indefinite SVMs. This interpretation is the basis for further theoretical analysis, e.g., investigating uniqueness, and for the derivation of practical guidelines like characterizing the suitability of indefinite SVMs.
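The "plugging in" procedure that the abstract refers to is easy to demonstrate. The sketch below is only an illustration of that practice (scikit-learn, the toy data and the parameter choices are my own, not the paper's): a sigmoid/tanh Gram matrix, which is in general indefinite, is handed to a standard SVM implementation as a precomputed kernel.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # toy labels

def tanh_kernel(A, B, gamma=1.0, c=-1.0):
    # sigmoid kernel; for many (gamma, c) the Gram matrix is not PSD
    return np.tanh(gamma * A @ B.T + c)

K = tanh_kernel(X, X)
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())  # typically negative

clf = SVC(kernel="precomputed").fit(K, y)        # indefinite kernel "plugged in"
print("training accuracy:", clf.score(K, y))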
Index Terms: Support vector machine, indefinite kernel, pseudo-Euclidean space, separation of convex hulls, pattern recognition.

Citation: Bernard Haasdonk, "Feature Space Interpretation of SVMs with Indefinite Kernels," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 482-492, April 2005, doi:10.1109/TPAMI.2005.78
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8423706293106079, "perplexity": 7913.112423467105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345776833/warc/CC-MAIN-20131218054936-00095-ip-10-33-133-15.ec2.internal.warc.gz"}
https://papers.neurips.cc/paper/2018/file/9a0ee0a9e7a42d2d69b8f86b3a0756b1-Reviews.html
NIPS 2018 Sun Dec 2nd through Sat the 8th, 2018 at Palais des Congrès de Montréal Paper ID: 5112 Data-dependent PAC-Bayes priors via differential privacy ### Reviewer 1 **Summary and main remarks** The manuscript investigates data-dependent PAC-Bayes priors. This is an area of great interest to the learning theory community: in PAC-Bayesian learning, most prior distributions do not rely on data and there has been some effort in leveraging information provided by data to design more efficient / relevant priors. Classical PAC-Bayes bounds hold for any prior and the crux is often to optimize a Kullback-Leibler (KL) divergence term between a pseudo-posterior (a Gibbs potential of the form $\exp(-\lambda R_n(\cdot))\pi(\cdot)$) and the prior. The manuscript starts with a very convincing and clear introduction to the problem, and builds upon the paper Lever, Laviolette and Shawe-Taylor (2013). The intuition defended by the authors is that when using a data-dependent prior which is *robust* to data changes (loosely meaning that the prior is not crudely overfitting the data), then PAC-Bayesian bounds using this prior must be tighter than similar bounds with any prior. This is a clever direction and the use of differential privacy to address this more formally appears very relevant to me. A second contribution of the manuscript is the use of SGLD (Stochastic Gradient Langevin Dynamics) to elicit such data-dependent priors (section 5). This section closes with an important message, which is that the approximation found by SGLD still yields a valid PAC-Bayesian bound (Corollary 5.4). This is reassuring for practitioners as they benefit from the comforting PAC-Bayesian theory. **Overall assessment** I find the paper to be very well-written and with clever contributions to PAC-Bayesian learning, with a significant impact to the NIPS community. I have carefully checked the proofs and found no flaw. Since differential privacy is not my strongest suit, I have lowered my confidence score to 3. I recommend acceptance. **Specific remarks** - Some improper capitalization of words such as PAC, Bayesian, etc. in the references. - typo: end of page 4, "ogften" -> often - with the authors' scheme of proofs, it is unclear to me wether the limitation of having a loss bounded (by 1 for simplicity) is easy to relax. Since some effort has been put by the PAC-Bayesian community to derive generalization bounds for unbounded / heavy-tailed / non-iid data, perhaps a comment in section 2 (other related works) would be a nice addition to the manuscript. See for example the references Catoni, 2007 (already cited); Alquier and Guedj, 2018 (Simpler PAC-Bayesian bounds for hostile data, Machine Learning); and references therein. [Rebuttal acknowledged.] ### Reviewer 2 The authors provide PAC-Bayes bounds using differential privacy for data dependent priors. They further discuss the approximation of the differential private priors based on the stochastic gradient Langevin dynamics, which has certain convergence properties in 2-Wasserstein distance. They further connect such an approach to study the generalization bound of neural nets using PAC-Bayes, which seems interesting. However, it is not quite clear to me why this procedure is helpful in measuring the generalization bound (e.g., for the neural nets). Empirically, we can measure the empirical generalization gap, and theoretically the differential privacy based bound can be looser than the PAC-Bayes bound. Further discussion regarding this aspect will be helpful. 
After rebuttal: I thank the authors for their efforts to address my queries. The answers are helpful and I think the idea of connecting the PAC-Bayes and differential privacy is interesting. ### Reviewer 3 %% Summary %% This paper develops new PAC-Bayes bounds with data-dependent priors. Distribution-dependent PAC-Bayesian bounds (especially ones with distribution-dependent priors) are by now well-known. However, as far as I am aware, the authors' bound is the first non-trivial PAC-Bayesian bound where the prior is allowed to depend on the data. The key idea to accomplish this is the observation that a differentially private prior can be related to a prior that does not depend on the data, thereby allowing (as the authors do in their first main result) the development of a PAC-Bayesian bound that uses a data-dependent-yet-differentially-private prior based on the standard one that uses a data-independent prior. The proof of the new bound depends on the connection between max information and differential privacy (versions of the latter imply bounds on versions of the former). A second key result of the authors, more relevant from the computational perspective, is that if a differentially private prior is close in 2-Wasserstein distance to some other (not necessarily differentially private) prior, then a PAC-Bayesian bound can again be developed using this latter data-dependent-but-not-differentially-private prior, where we pick up some additional error according to the 2-Wasserstein distance. This result allows the authors to leverage a connection between stochastic gradient Langevin dynamics (which is computationally friendly) and Gibbs distributions (which are differentially private but computational nightmares). In addition to the above bounds, the authors also perform an empirical study. I have to admit that I focused more on the theoretical guarantees, as I believe they already are interesting enough to warrant publication of the paper. Also, the authors did a poor job in writing the Section 6, leaving vital details in the appendix, like figures which they constantly refer to. I feel that this was not in the spirit of abiding by the 8 page limit, as the paper was not self-contained as a result. This also was confusing, as the authors clearly had extra space throughout the paper to include these two figures. %% Reviewer's Expertise ** I am an expert in PAC-Bayesian inequalities and also well-versed in recent developments related to differential privacy. I am less familiar with stochastic gradient Langevin dynamics but know the basics. %% Detailed Comments %% Aside from Section 6, I found this paper to be remarkably well-written and clear. I especially appreciated the key insight in the paragraph immediately after equation (1.1), for why the Lever et al. bound often must be vacuous for large values of $\tau$. I believe that Theorem 4.2 is a very interesting (and not so complicated) result which really makes for a significant and original contribution. It should open up future directions in developing new PAC-Bayesian guarantees, and so this result alone I feel makes a compelling argument for accepting this paper. In addition, Theorem 5.3, which allows us to only require a prior that is close in 2-Wasserstein distance to a differentially private one, further broadens the PAC-Bayesian toolbox in a significant and original way. 
That said, Theorem 5.3 has a major weakness, which is that it forces us to give up on obtaining "very high probability" bounds, since (5.3) has a term which grows as $1/\delta'$ as the failure probability $\delta'$ decreases. This is a crucial weakness which the authors should mention after Theorem 5.3, including discussion of whether or not this issue is fundamental. I looked through the proof of the main results Theorems 4.2 and 5.3 and I believe the analysis is sound. I do think the authors made a typo in the definition of $g$ in Theorem 5.3; you should remove the negative sign (indeed the logarithm in (5.4) is not well-defined if the negative sign stays!). I would have liked a more useful/detailed version of what is currently Corollary 5.4. It currently glosses over too many details to really say much. Since you have extra space in the paper, I recommend including a more explicit version of this corollary. You should also include around this point a citation to the Raginsky / Rakhlin / Telgarsky (2017) paper that you refer to in the appendix. I also recommend giving, at least in the appendix, a precise citation of which result from their paper you are using.

Minor comments: You never explain the notation for the squiggly arrow in the main paper. Please fix this. It is too important / widely used in the main text to be left to the appendix. On page 4, line 2, you say "the above result". I think you are referring to Definition 3.1, which is of course a definition, not a result. So, you should change the text accordingly. In Theorem 3.2, you have one instance of $n$ which should be replaced by $m$ (see the inline math following the word "Then"). In the proof of Lemma D.1, you should mention that the upper bound of total variation by KL divergence is from Pinsker's inequality. "Standard results" is a bit vague to inform the uninformed reader.

%% UPDATE AFTER AUTHOR'S RESPONSE %% I've read the author's rebuttal and their responses are satisfactory. I do hope you will highlight the weakness of the high probability bound involving the $1/\delta'$ dependence.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731937408447266, "perplexity": 642.883601722619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057913.34/warc/CC-MAIN-20210926175051-20210926205051-00360.warc.gz"}
http://www.nccr-swissmap.ch/research/publications/supersymmetric-affine-yangian
# The supersymmetric affine Yangian

Monday, 20 November, 2017

## Published in: arXiv:1711.07449

The affine Yangian of $\mathfrak{gl}_1$ is known to be isomorphic to $\mathcal{W}_{1+\infty}$, the W-algebra that characterizes the bosonic higher spin -- CFT duality. In this paper we propose defining relations of the Yangian that is relevant for the $N=2$ superconformal version of $\mathcal{W}_{1+\infty}$. Our construction is based on the observation that the $N=2$ superconformal $\mathcal{W}_{1+\infty}$ algebra contains two commuting bosonic $\mathcal{W}_{1+\infty}$ algebras, and that the additional generators transform in bi-minimal representations with respect to these two algebras. The corresponding affine Yangian can therefore be built up from two affine Yangians of $\mathfrak{gl}_1$ by adding in generators that transform appropriately.

## Author(s): Matthias R. Gaberdiel, Wei Li, Cheng Peng, Hong Zhang
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364778995513916, "perplexity": 2560.964499474401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00016.warc.gz"}
http://math.stackexchange.com/questions/208470/poisson-integral-on-mathbbh-for-boundary-data-which-is-orientation-preservi
# Poisson integral on $\mathbb{H}$ for boundary data which is an orientation-preserving homeomorphism of $\mathbb{R}$

Let $f$ be a real-valued function (in my case, an orientation-preserving homeomorphism of $\mathbb{R}$) on the real line $\mathbb{R}$ which is not in any $L^p$-space. Let us take the simplest example $f(t)=t$. Is there a $\textit{direct or indirect}$ way of computing the harmonic extension $H(f)$ to $\mathbb{H}$ of such a function? For the $\textit{direct}$ way, if I try to use the standard Poisson formula with the Poisson kernel $p(z,t)=\frac{y}{(x-t)^2+y^2}, z= x+iy$, then I am getting $H(f)(i)=\infty$ for $f(t)=t$. But all the sources [Evans, PDE, p. 38 or Wikipedia http://en.wikipedia.org/wiki/Poisson_kernel] for the Poisson formula for $\mathbb{H}$ assume that the boundary map $f$ is in some $L^p, 1\leq p \leq \infty$. But then how should I compute $H(f)$ for very nice functions like $f(t)=t$ and make that equal to $H(f)(z)=z$? For the $\textit{indirect}$ way, I know one solution to the problem could be to pass to the unit disk model $\mathbb{D}$ by a Möbius transformation $\phi$ that sends $\mathbb{H}$ to $\mathbb{D}$, and then solve the problem in $\mathbb{D}$, call the solution on the disk $F$, and then take $\phi^{-1}\circ F \circ \phi :\mathbb{H}\to \mathbb{H}$. This does solve the problem for $f(t)=t$, but my concern is that in general $\phi^{-1}\circ F \circ \phi$ may $\textit{not}$ be harmonic, because post-composition of harmonic with holomorphic is not harmonic in general. In that case, how would I solve this problem? Answer or reference would be greatly appreciated! - Is there necessarily always a harmonic extension? Why are you convinced there is? Note btw that homeomorphisms on $\mathbb R$ are not really the "nice" functions in this context... You're not doing topology here, you are trying to solve a PDE with given boundary conditions. So your well-behaved functions are those on which you know some bounds in whatever norm. –  Sam Oct 7 '12 at 1:42 Having said this, integration against a kernel really does not seem suitable here. But maybe some variant of the Perron method might still show the existence of a solution. –  Sam Oct 7 '12 at 1:50 @ Sam L. : I agree that we are not doing topology here, but there are some good theorems connecting topology and harmonic extensions, for example, the harmonic extension of a unit circle homeomorphism is a homeomorphism of the closed unit disk (Rado-Kneser-Choquet theorem). In fact, there are techniques in low-dimensional topology which use harmonic extension of circle homeomorphisms. But for calculational simplifications, sometimes it is easier to use the $\mathbb{H}$-model, of course, only if the harmonic extension exists! Thanks though. –  Mathmath Oct 7 '12 at 2:41 First of all, the harmonic extension of $f(t)=t$ cannot be $F(z)=z$ because the natural harmonic extension of a real function is real. You have to add $+iy$ "manually", it does not come from the Poisson kernel. (Unless you interpret the boundary data as $f(t)+i\delta_\infty$, which makes some sense but is likely to be more confusing than helpful.) Anyway, we have a legitimate question: given an increasing (orientation-preserving) homeomorphism $f\colon \mathbb R\to\mathbb R$, how (and when) can we realize $f$ as boundary values of a real harmonic function $F(x,y)$ that is increasing with respect to $x$ for any fixed $y>0$? (The increasing property is what will make $F(x,y)+iy$ a homeomorphism.)
It helps to look at the derivative $f'$, which is sure to exist at least as a positive Radon measure $\mu$. We want to get a positive harmonic function $u$ out of $\mu$; then $F$ will be a harmonic function such that $F_x=u$ (you can get $F$ by completing $u$ to a holomorphic function $u+i\tilde u$ and taking the antiderivative of that). Thus, the problem is reduced to getting a positive harmonic function out of a positive measure $\mu$ on the boundary. This is possible if and only if the Poisson integral of $\mu$ converges. That is, if the integral converges at one point, then it converges at all points and gives what we want. If it diverges, there is no such positive function, hence no harmonic homeomorphic extension. Two examples will illustrate the above. With $f(t)=t$ we have $f'=1$. The Poisson integral of $f'$ converges and gives $u=1$. Complete it to a holomorphic function (still $1$) and integrate: you get $z$. But if $f(t)=t^3$, then $f'(t)=3t^2$ and the Poisson integral of $f'$ diverges. There is no harmonic homeomorphic extension of this $f$. It extends to a harmonic (indeed holomorphic) map $f(z)=z^3$ which is not injective.
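A quick symbolic check of these two examples (my own addition, using SymPy and the normalized Poisson kernel $\frac{1}{\pi}\frac{y}{(x-t)^2+y^2}$ for the half-plane):

import sympy as sp

x, t = sp.symbols('x t', real=True)
y = sp.symbols('y', positive=True)
kernel = y / (sp.pi * ((x - t)**2 + y**2))   # normalized Poisson kernel

# f(t) = t  =>  f' = 1: the Poisson integral of f' converges (to the constant 1)
print(sp.integrate(kernel, (t, -sp.oo, sp.oo)))        # 1

# f(t) = t**3  =>  f'(t) = 3 t**2: the integrand tends to a nonzero constant,
# so its integral over the whole line diverges
print(sp.limit(kernel * 3*t**2, t, sp.oo))             # 3*y/pi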
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643760323524475, "perplexity": 164.26204605946134}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661675.84/warc/CC-MAIN-20150417045741-00081-ip-10-235-10-82.ec2.internal.warc.gz"}
https://community.wolfram.com/groups/-/m/t/897953
# [GIF] Modular (Point orbit under the action of the modular group)

Posted 3 years ago

[GIF: Modular]

In the two-part talk she gave at the ICERM Workshop on Illustrating Mathematics, Katherine Stange talked about Kleinian groups – which are discrete subgroups of $PSL(2,\mathbb{C})$ – and the various beautiful pictures that come up when studying them. She mostly focused on Schmidt arrangements, which are the images of the real line under the action of a Bianchi group, a special type of Kleinian group. With that somehow in the back of my head, I set out to show the orbit of a point under the action of the modular group, otherwise known as $PSL(2,\mathbb{Z})$, on the hyperbolic plane.

$PSL(2,\mathbb{Z})$ naturally acts on the half-plane model of the hyperbolic plane by fractional linear transformations, so that the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ sends $z$ to $\frac{az+b}{cz+d}$. This is a discrete subgroup of the isometry group of the hyperbolic plane, which is just $PSL(2,\mathbb{R})$ acting by fractional linear transformations. Now, as you can see from the animation, I'm using the Poincaré disk model rather than the upper half-plane model, so we need to translate appropriately. Since the Möbius transformation $f(z)=\frac{z-i}{z+i}$ sends the upper half-plane to the unit disk, we can conjugate the fractional linear transformation $z \mapsto \frac{az+b}{cz+d}$ by $f$ to see that the corresponding action on the Poincaré disk model is $z \mapsto \frac{\alpha z + \beta}{\bar{\beta}z+\bar{\alpha}}$ for complex numbers $\alpha$ and $\beta$ with $|\alpha|> |\beta|$ (more precisely, $\alpha = a+d+i(b-c)$ and $\beta=a-d-i(b+c)$). Hence the following definition:

    PoincareMobius[{α_, β_}] := (ReIm[(α #1 + β)/(Conjugate[β] #1 + Conjugate[α])] &)[Complex @@ #1] &

All of this just goes to show that you get a modular group element that acts on the Poincaré disk by choosing pairs $(\alpha, \beta)$ of Gaussian integers with $|\alpha| > |\beta|$. So that's how I'm getting the animation, though I'm doing it in a kind of stupid way: I'm iterating over the $L^\infty$ norm of $(\alpha, \beta)$, thought of as a point in $\mathbb{R}^4$, applying them to the point $(0,0)$, and (hopefully) throwing away duplicates.
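(A quick sanity check of the definition above; the pair $\alpha=1+i$, $\beta=1$ is just an arbitrary choice with $|\alpha|>|\beta|$, not one of the pairs used for the animation.)

    (* the identity pair should fix the origin *)
    PoincareMobius[{1, 0}][{0, 0}]
    (* {0, 0} *)

    (* a non-trivial pair moves the origin to beta/Conjugate[alpha] *)
    PoincareMobius[{1 + I, 1}][{0, 0}]
    (* {1/2, 1/2} *)

Both outputs land inside the unit circle, as they must whenever $|\alpha|>|\beta|$.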
Of course, I could have also picked just a few generators and applied them recursively to the point, and maybe I will in the future. Here's the code which produces the still image shown below, which is the seventh frame from the animation with some slightly different coloring (to generate the frames in the animation, which have consistent coloring, I replaced #[[2]]/(4 n - 1) in the second argument of Blend with #[[2]]/51):

    Module[
     {n = 6, cols = RGBColor /@ {"#EAEAEA", "#FF2E63", "#08D9D6", "#252A34"},
      newindices, indices = {}, newpts, cleanedpts = {{{0, 0}, 0}}},
     Do[
      newindices = DeleteCases[
        DeleteDuplicates[
         Sort[Complement[Tuples[Range[-depth, depth], {4}], indices],
          Norm[#1, 1] > Norm[#2, 1] &]],
        {a_, b_, 0, 0} | {a_, b_, c_, d_} /; Norm[{c, d}] >= Norm[{a, b}]];
      indices = Join[indices, newindices];
      newpts = Table[
        {PoincareMobius[{a[[1]] + a[[2]] I, a[[3]] + a[[4]] I}][{0, 0}], Norm[a, 1]},
        {a, newindices}];
      cleanedpts = Join[cleanedpts,
        DeleteDuplicates[Sort[newpts, #1[[2]] < #2[[2]] &], #1[[1]] == #2[[1]] &]];,
      {depth, 0, n}];
     Graphics[
      {
       {PointSize[.03/If[#[[2]] == 0, 1, #[[2]]]],
         Blend[cols[[2 ;; -2]], #[[2]]/(4 n - 1)],
         Point[#[[1]]]} & /@ Reverse[cleanedpts],
       cols[[1]], Thickness[.003], Circle[]
      },
      ImageSize -> 1080, PlotRange -> Sqrt[2], Background -> cols[[-1]]
     ]
    ]

This takes about 20 seconds to run on my 5-year-old MacBook Air, so it's definitely not fast, but that's surely due to horribly inefficient programming.

5 Replies

Posted 3 years ago - Another post of yours has been selected for the Staff Picks group, congratulations! We are happy to see you at the top of the "Featured Contributor" board. Thank you for your wonderful contributions, and please keep them coming!

Posted 3 years ago

Very nice visualisation! On two occasions:

    Sort[..., Norm[#1, 1] > Norm[#2, 1] &]

could be sped up by:

    SortBy[..., Norm[#, 1] &]

About 25% faster on my laptop... In addition:

    DeleteDuplicatesBy[SortBy[newpts, #[[2]] &], First]

speeds it up another 10x, roughly 13x in total... Moral of the story: if the comparison in Sort, DeleteDuplicates, Gather, et cetera only depends on a per-element key, use SortBy, DeleteDuplicatesBy, GatherBy, et cetera. Those are a lot faster because they evaluate the criterion only once per element, sort those values (very fast), and reorder the original list. Sort, by contrast, compares each pair again and again, reevaluating the criterion, because in theory you could have an 'asymmetrical' comparison, e.g. Sort[list, First[#1] > Last[#2] &] (quite uncommon, but it happens sometimes!).

Posted 3 years ago

Oh wow, thank you! I didn't even know about SortBy and DeleteDuplicatesBy, but that's obviously the way to go.
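For what it's worth, here is a small self-contained timing sketch of the pattern suggested in the reply above, on made-up random 4-vectors rather than the actual index lists. Note that SortBy sorts ascending, so a Reverse (or a negated key) is needed to reproduce the descending order of the original comparator:

    SeedRandom[1];
    data = RandomReal[{-5, 5}, {20000, 4}];

    (* pairwise comparator: the criterion is re-evaluated for every comparison *)
    t1 = First@AbsoluteTiming[r1 = Sort[data, Norm[#1, 1] > Norm[#2, 1] &];];

    (* per-element key: the criterion is evaluated once per element *)
    t2 = First@AbsoluteTiming[r2 = Reverse@SortBy[data, Norm[#, 1] &];];

    {t1, t2, r1 == r2}

    (* the analogous replacement for the duplicate removal on the point list *)
    (* DeleteDuplicatesBy[SortBy[newpts, #[[2]] &], First] *)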
http://mathhelpforum.com/statistics/54960-probability-questions.html
1. ## Probability Questions

Wasn't sure how to categorize these.

3. Calculate the probability of selecting a college student at random and finding out they have an IQ less than 60 if (a) the probability distribution of the college students' IQs is N(64,5) and (b) if the probability distribution is uniform with endpoints 50 and 63.

8. Suppose a computer has 15 main components, each works or does not work independent of the others, with a probability of working equal to 0.8 for each. Now suppose the computer will not boot if 4 or more of the components do not work. Calculate the probability that the computer does not boot.

9. A computer has 1,500 switches, each working or not working independent of the others. Each switch has a probability of 0.6 of working. At least 915 switches must work properly or the computer will not boot. Calculate the probability that this computer boots.

Any help at all would be appreciated. Thanks.

2. Originally Posted by AlphaOmegaStrife
Wasn't sure how to categorize these. 3. Calculate the probability of selecting a college student at random and finding out they have an IQ less than 60 if (a) the probability distribution of the college students' IQs is N(64,5) and (b) if the probability distribution is uniform with endpoints 50 and 63. [snip]

(a) $Z = \frac{X - \mu}{\sigma} = \frac{60 - 64}{5} = -0.8$. Therefore $\Pr(X < 60) = \Pr(Z < -0.8) = \Pr(Z > 0.8) = 1 - \Pr(Z < 0.8)$ by symmetry.

(b) The pdf of X is $f(x) = \frac{1}{13}$ for 50 < x < 63 and zero elsewhere. So $\Pr(X < 60) = \frac{1}{13} \, (60 - 50) = \frac{10}{13}$.

3. Originally Posted by AlphaOmegaStrife
[snip] 8. Suppose a computer has 15 main components, each works or does not work independent of the others, with a probability of working equal to 0.8 for each. Now suppose the computer will not boot if 4 or more of the components do not work. Calculate the probability that the computer does not boot. [snip]

Let X be the random variable number of components that don't work. X ~ Binomial(n = 15, p = 1 - 0.8 = 0.2). Calculate $\Pr(X \geq 4) = 1 - \Pr(X \leq 3)$.

4. Originally Posted by AlphaOmegaStrife
[snip] 9. A computer has 1,500 switches, each working or not working independent of the others. Each switch has a probability of 0.6 of working. At least 915 switches must work properly or the computer will not boot. Calculate the probability that this computer boots. Any help at all would be appreciated. Thanks.

Let X be the random variable number of switches that work. X ~ Binomial(n = 1500, p = 0.6). Calculate $\Pr(X \geq 915)$. You can use the normal approximation to the binomial distribution.
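To finish the arithmetic in the three replies (taking, as reply 2 does, N(64,5) to mean a standard deviation of 5, and reading a standard normal table; the figures are rounded):

For 3(a): $\Pr(X<60)=1-\Pr(Z<0.8)\approx 1-0.7881=0.2119$.

For 8: $\Pr(X\le 3)=\sum_{k=0}^{3}\binom{15}{k}(0.2)^k(0.8)^{15-k}\approx 0.648$, so $\Pr(X\ge 4)=1-\Pr(X\le 3)\approx 0.352$.

For 9: $\mu=np=900$ and $\sigma=\sqrt{np(1-p)}=\sqrt{360}\approx 18.97$, so with a continuity correction $\Pr(X\ge 915)\approx\Pr\!\left(Z\ge\frac{914.5-900}{18.97}\right)=\Pr(Z\ge 0.76)\approx 0.22$.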
https://forum.bebac.at/forum_entry.php?id=18371&amp;category=26&amp;order=time
## Suitable equipment? [Study Assessment]

Hi ssussu,

» » […] Did you use an infusion pump or an infusion bottle / drip chamber?
» » Yes, the bubble is extruded by squeezing the tube to float the bubble up from the liquid surface of the tube, like this
» [image]
» so, the dose is not lost.
» » » » » 2. only if you lost nothing […] the Cmax and AUC will equal the one with a constant rate infusion.
» » » » » » […] if the dose is not lost, can I say just the AUC will be affected but the Cmax is the same as with the constant rate infusion?
» » » » Read again what I wrote above.
» » So, you mean the Cmax and the AUC both will equal the one with a constant rate infusion?

Yes!

» Don't we need to consider the elimination rate?

Why? Elimination is independent of any kind of input.

» if we speed up the infusion rate while the elimination rate is not changed, the Cmax won't be greater than with the constant rate infusion?

Again: yes. In my simulations I assumed:

1. ▬▬ D=1 with a constant infusion rate of 2 h⁻¹ (i.e., infusion time of exactly 30 minutes).
2. ▬▬ D=0.5 with a constant infusion rate of 2 h⁻¹ (i.e., planned infusion time of exactly 30 minutes). Infusion completely stopped after 15 minutes for five minutes. Infusion of the remaining D=0.5 resumed at 20 minutes with an accelerated infusion rate of 3 h⁻¹ (i.e., infusion time of exactly ten minutes).

I have some doubts whether with your equipment (drip counter) you managed 1. to have the same infusion rate across all subjects (I guess in some subjects the infusion was completed earlier or later than at 30 minutes) and 2. to adjust an accelerated infusion rate properly. ElMaestro had valid points as well. With an accelerated infusion rate you may run into safety problems.

Coming back to your very first question:

» Should the Cmax be excluded? What about the AUC? What should the investigator do if the same situation (need to stop temporarily or need to change the infusion speed) occurs next time?

For the next time I strongly recommend infusion pumps (yeah, I know, expensive): exact infusion rates, no bubbles, no problems. If you had kept the original infusion rate (after the five-minute stop), i.e., completed the infusion at 35 minutes, you would have seen a Cmax which is 0.35% (!) lower than expected with the planned schedule – if you have an early sampling time point. If not, you will observe a lower Cmax. How much lower depends on the sampling schedule and the elimination rate. Theoretically the AUC is not affected (it depends only on the dose).

PS: Why did it take the nurse five minutes to expel the bubble? In my experience about one minute should be sufficient. Would look like this:

Cheers, Helmut Schütz
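For anyone who wants to reproduce that kind of simulation, below is a minimal sketch assuming a one-compartment model with first-order elimination; the elimination rate constant (ke = 1 h⁻¹) and unit volume are arbitrary assumptions, not values from the study, while the two input schedules are the ones described above (2 h⁻¹ for 30 min versus 2 h⁻¹ for 15 min, a 5-minute stop, then 3 h⁻¹ for 10 min). Whether the two Cmax values agree closely depends on the assumed ke and on where the samples fall, as noted above; the AUC depends only on the dose.

    ke = 1.0;  (* assumed first-order elimination rate constant, 1/h *)

    (* planned schedule: rate 2/h for 0.5 h (dose = 1) *)
    plannedRate[t_] := Piecewise[{{2, 0 <= t < 1/2}}, 0];

    (* interrupted schedule: 2/h for 15 min, stop for 5 min, then 3/h for 10 min (dose = 1) *)
    interruptedRate[t_] := Piecewise[{{2, 0 <= t < 1/4}, {0, 1/4 <= t < 1/3}, {3, 1/3 <= t < 1/2}}, 0];

    cPlanned = NDSolveValue[{c'[t] == plannedRate[t] - ke c[t], c[0] == 0}, c, {t, 0, 12}];
    cInterrupted = NDSolveValue[{c'[t] == interruptedRate[t] - ke c[t], c[0] == 0}, c, {t, 0, 12}];

    (* with these schedules the maximum occurs at the end of the infusion (t = 0.5 h) *)
    {cPlanned[0.5], cInterrupted[0.5]}

    (* AUC from 0 to 12 h; the fully extrapolated AUC equals dose/ke for any input schedule *)
    {NIntegrate[cPlanned[t], {t, 0, 12}], NIntegrate[cInterrupted[t], {t, 0, 12}]}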
http://math.stackexchange.com/questions/87365/a-very-simple-optimisation-problem/87437
# A very simple optimisation problem

Given any set of real scalar values $V=\{v_i \mid 1 \leq i \leq n\}$ and a distinct value $v_p$, define $c$ by
$$c= \sum_{i=1}^n |v_i-v_p|.$$
What is the easiest way to determine $v_p$ such that $c$ is minimised? One approach would be an iterative binary search for $v_p$ between $\min(V)$ and $\max(V)$ - can this be improved upon?

The median of the $v_i$ minimizes this function, so you should sort the $v_i$ and pick the middle value, which takes $O(n\log n)$ time. If $n$ is even, pick any value between the middle two elements (it is easy to see that changing $v_p \rightarrow v_p + \delta v_p$ doesn't change $c$ unless $v_p$ crosses one of the $v_i$).

Edit: A binary search would find an optimal value in $O(\log n)$ function evaluations, each of which executes in $O(n)$ time, for a total running time of $O(n\log n)$. If you follow my advice and sort the $v_i$ to find the median, it will take $O(n\log n)$ comparisons on average. However, you can be smarter: using a selection algorithm it is possible to compute the sample median in $O(n)$ time, and this is a lower bound on the complexity.

Plot $n$ (say $9$) arbitrary points $v_i$ on a horizontal $v$-axis and mark in red a trial point $x$ somewhere in their midst. Now check what happens to the quantity $c(x):=\sum_{i=1}^n |v_i-x|$ when you move $x$ slightly to the left or to the right. In the end you will see where $x$ must lie to make $c(x)$ minimal. (The solution may be non-unique, as, e.g., in the case $n=2$.)
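A quick numerical cross-check of the accepted answer on a made-up data set (the minimizer from NMinimize should agree, up to numerical tolerance, with the sample median, and the minimum with the summed absolute deviations from it):

    v = {1., 2., 5., 7., 20.};   (* arbitrary example data *)

    (* direct numerical minimization of c(x) = sum_i |v_i - x| *)
    NMinimize[Total[Abs[x - v]], x]
    (* roughly {24., {x -> 5.}} *)

    (* closed-form answer: the median, and c evaluated there *)
    {Median[v], Total[Abs[v - Median[v]]]}
    (* {5., 24.} *)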