719,055
<p>I'm trying to show that if solid tori $T_1, T_2$, $T_i = S^1 \times D^2$, are glued by a homeomorphism between their respective boundaries, then the homeomorphism type of the identification space depends on the choice of homeomorphism up to, I think, isotopy. (Please forgive the rambling; I'm trying to put together a lot of different things from different sources, and I don't yet have a very coherent general picture.) I first thought of lens spaces, but the gluing here is not done by a homeomorphism.</p> <p>I have some fuzzy ideas here that I would like to make precise: I know this has to do with Heegaard splittings; specifically, this is a genus-1 splitting (actually, a genus-1 gluing), and the gluing may be determined by a matrix in $SL(2,\mathbb Z)$, which determines the induced map on the top homology, and different induced maps would result in different homeomorphism types for the glued spaces.</p> <p>I think we can also see this from the perspective of Dehn surgery (please feel free to correct anything I write here), where we remove a link $L$ and a tubular neighborhood $T(L)$ of $L$, and then glue in another torus. I know that an $n$-framing is equivalent to removing a solid torus, twisting $n$ times, and then regluing. But it's obvious from the post that I don't know how to show that the homeomorphism class of the space glued along $h: \partial T_1 \rightarrow \partial T_2$ depends on $h$.</p> <p>Thanks, and sorry for the rambling (not my fault, I was born a rambling man).</p>
Abhishek Verma
94,307
<p>Suppose there is a $C$ which <strong>looks like</strong> $A/B$ in some given notation. Now in some rule you denote one value by $C$ and others by $A$ and $B$, and the rule happens to be $C = A/B$. You still wouldn't say that you broke the notation $C$ into $A/B$ just because it looked that way.</p> <p>You will meet more such cases in integration.</p>
2,912,376
<p>I understood why he chose the positive square root for the sine, but why is the tangent also positive? Isn't the tangent both positive and negative on this interval? <a href="https://i.stack.imgur.com/2rBFs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2rBFs.png" alt="enter image description here"></a></p>
egreg
62,967
<p>Note that, for $\theta\in[0,\pi]$ and $\theta\ne\pi/2$, $$ \tan\theta=\frac{\sin\theta}{\cos\theta} $$ Since you have already established that $\sin\arccos x=\sqrt{1-x^2}$, you can directly conclude that $$ \tan\arccos x=\frac{\sqrt{1-x^2}}{x} $$ because $\cos\arccos x=x$ by definition.</p> <p>Using $\pm$ is misleading, in my opinion.</p>
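A quick numerical spot-check of the identity above (a small Python aside; the sample points are my own and avoid $x = 0$, where $\arccos x = \pi/2$):

```python
import math

def tan_arccos(x):
    """Evaluate tan(arccos x) directly."""
    return math.tan(math.acos(x))

def tan_arccos_closed_form(x):
    """The closed form sqrt(1 - x^2) / x derived in the answer."""
    return math.sqrt(1 - x * x) / x

# The two agree on (-1, 0) and on (0, 1), i.e. wherever arccos x != pi/2;
# the sign comes out right for negative x with no need for a "±".
for x in (-0.9, -0.5, 0.25, 0.5, 0.99):
    assert math.isclose(tan_arccos(x), tan_arccos_closed_form(x))
```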
697
<p><a href="https://mathoverflow.net/questions/36307/why-cant-i-post-a-question-on-math-stackexchange-com">This question</a> was posted on MO about not being able to post on math.SE. While MO wasn't the right place for the question, I have to wonder what is. New users who are experiencing difficulty using math.SE can't post about it on meta, so where do they turn? The only thing I can think of is that they have to figure out that it is possible for them to contact the moderators, but nowhere is it explicitly described how to do this. Maybe something should be added to the FAQ.</p>
Robert Cartaino
69
<p>The footer of every single page on the site has a <code>contact us</code> link listed in bold.</p>
1,116,022
<p>I've always had this doubt. It's perfectly reasonable to say that, for example, 9 is bigger than 2.</p> <p>But does it ever make sense to compare a real number and a complex/imaginary one?</p> <p>For example, could one say that $5+2i&gt; 3$ because the real part of $5+2i $ is bigger than the real part of $3$? Or is it just a senseless statement?</p> <p>Can it be stated that, say, $20000i$ is bigger than $6$ or does the fact that one is imaginary and the other is natural make it impossible to compare their 'sizes'?</p> <p>It would seem that the 'sizes' of numbers of any type (real, rational, integer, natural, irrational) can be compared, but once imaginary and complex numbers come into the picture, it becomes a bit counter-intuitive for me.</p> <p>So, does it ever make sense to talk about a real number being 'more than' or 'less than' a complex/imaginary one?</p>
msteve
67,412
<p>To compare two complex numbers, we usually look at their modulus: if $z = x+iy$, then the modulus of $z$ is $|z| := \sqrt{x^2 + y^2}$. Regarding $z$ as a point in the complex plane, the modulus of $z$ is its distance to the origin. We can now compare two complex numbers such as $5+2i$ and $3$: notice that $|5+2i| = \sqrt{29}$ and $|3| = 3$, so in this sense, $5+2i$ is 'larger' (better to think: farther away from the origin) than $3$.</p>
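A quick numeric check of the comparison above (a small Python aside; `abs` on a `complex` value computes exactly this modulus):

```python
import math

# The modulus of z = x + iy is sqrt(x^2 + y^2); Python's abs() on a
# complex number computes exactly this.
z = 5 + 2j
assert math.isclose(abs(z), math.sqrt(29))  # |5 + 2i| = sqrt(29)
assert abs(z) > abs(3 + 0j)                 # so 5 + 2i is 'larger' in this sense
```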
1,113,415
<p>Is there a website or a book with a list of calculus theorems? And what are good ways to remember such a list?</p>
Michael Albanese
39,599
<p>An expression is said to be undefined if its meaning or value is not defined. In some cases, an expression is undefined because it is impossible to define it in a consistent or meaningful way; this is the case for the expression $\frac{1}{0}$. No matter how you choose to define the expression $\frac{1}{0}$, it leads to inconsistencies under the usual rules of arithmetic; for that reason, we leave the expression undefined.</p> <p>In the case of the expression $\frac{1}{0} - 0.5$, if we were to define its value, using the rules of arithmetic (in particular, adding $0.5$), one would obtain a value for $\frac{1}{0}$ which we already know leads to inconsistencies. Therefore, the expression $\frac{1}{0} - 0.5$ is undefined for the same reason that the expression $\frac{1}{0}$ is: there is no way to assign it a value which is consistent with the usual rules of arithmetic.</p>
1,860,267
<blockquote> <p>Prove the convergence of</p> <p><span class="math-container">$$\int\limits_1^{\infty} \frac{\cos(x)}{x} \, \mathrm{d}x$$</span></p> </blockquote> <p>First I thought the integral does not converge, because</p> <p><span class="math-container">$$\int\limits_1^{\infty} -\frac{1}{x} \,\mathrm{d}x \le \int\limits_1^{\infty} \frac{\cos(x)}{x} \, \mathrm{d}x$$</span></p> <p>But in this case,</p> <p><span class="math-container">$$\int\limits_1^{\infty} \frac{\cos(x)}{x} \, \mathrm{d}x \le \int\limits_1^{\infty} \frac{1}{x^2} \, \mathrm{d}x$$</span></p> <p>it would converge by the majorant criterion (comparison test). What's the right way?</p>
Olivier Oloa
118,798
<p>You might want to use integration by parts, obtaining for <span class="math-container">$M\ge1$</span>, <span class="math-container">$$ \int_{1}^M \frac{\cos x}{x}\: dx=\left[\frac{\sin x}{ x}\right]_1^M+ \int_1^M \frac{\sin x}{x^2}\: dx $$</span> letting <span class="math-container">$M \to \infty$</span> gives <span class="math-container">$$ \int_{1}^\infty \frac{\cos x}{x} \:dx=\lim_{M \to \infty}\int_1^M \frac{\cos x}{x} \:dx= -\sin 1+\int_1^\infty \frac{\sin x}{ x^2}\: dx $$</span>then one may conclude by the absolute convergence of the latter integral: <span class="math-container">$$ \left|\int_1^\infty \frac{\sin x}{ x^2}\: dx\right|&lt;\int_1^\infty \frac{|\sin x|}{ x^2}\: dx&lt;\int_1^\infty \frac{1}{x^2}\: dx&lt;\infty. $$</span></p>
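The integration-by-parts argument above can be checked numerically (a Python sketch; the truncation point $M = 2000$ and the midpoint step are my own choices, justified by the tail bound $\int_M^\infty x^{-2}\,dx = 1/M$ for the absolutely convergent integral):

```python
import math

def integral_sin_over_x2(a=1.0, b=2000.0, h=0.01):
    """Midpoint rule for the absolutely convergent integral of sin(x)/x^2."""
    n = int(round((b - a) / h))
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        total += math.sin(x) / (x * x)
    return total * h

# By the integration-by-parts identity derived above,
#   int_1^oo cos(x)/x dx = -sin(1) + int_1^oo sin(x)/x^2 dx.
estimate = -math.sin(1) + integral_sin_over_x2()
# The true value is -Ci(1), approximately -0.3374 (the cosine integral at 1).
assert abs(estimate - (-0.3374)) < 0.01
```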
4,469,583
<p>How can I construct/define arbitrary semi-computable (but not computable) sets?</p> <p>Recall that a set A is semi-computable if it is the domain of a computable function f. Recall also that a set A is computable if and only if both A and its complement (A<sup>c</sup>) are semi-computable.</p> <p>In particular, I am looking for a semi-computable (but not computable) set A such that A is a proper subset of E, where E is the set of even numbers: E = { 2x | x ∈ N }.</p>
SHC MostWanted
905,738
<p>The answer is provided by @Mitchell Spector in comments.</p> <p>Pick any semi-computable, but not computable set <span class="math-container">$K$</span>. Then, a new set can be defined as <span class="math-container">$\{ 2x \mid x \in K\}$</span>.</p> <p>Such a set is a proper subset of <span class="math-container">$E$</span> because it consists only of even numbers, and it cannot be computable (but it is semi-computable), because if it was computable, then <span class="math-container">$K$</span> would be computable as well.</p>
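The construction in this answer can be sketched in code. Note that `semi_decide_K` below is a hypothetical black box standing for a semi-decision procedure for $K$; in the runnable illustration I substitute a computable stand-in purely to exercise the reduction (this of course does not make a real $K$ computable):

```python
def make_semi_decider_2K(semi_decide_K):
    """Given a semi-decision procedure for K, build one for {2x : x in K}.

    The returned procedure rejects odd inputs immediately (they cannot be
    in 2K) and otherwise defers to semi_decide_K, so it halts on exactly
    the members of 2K whenever semi_decide_K halts on exactly K.
    """
    def semi_decide_2K(n):
        if n % 2 != 0:
            return False              # only even numbers can belong to 2K
        return semi_decide_K(n // 2)  # may run forever, like any semi-decider
    return semi_decide_2K

# Runnable illustration with a computable stand-in for K (multiples of 3);
# a real K would be semi-computable but not computable.
stand_in_K = lambda x: x % 3 == 0
f = make_semi_decider_2K(stand_in_K)
assert f(6) and not f(3) and not f(4)
```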
33,582
<p>My code finding <a href="http://en.wikipedia.org/wiki/Narcissistic_number">Narcissistic numbers</a> is not that slow, but it's not in functional style and lacks flexibility: if $n \neq 7$, I have to rewrite my code. Could you give some good advice?</p> <pre><code>nar = Compile[{$}, Do[ With[{ n = 1000000 a + 100000 b + 10000 c + 1000 d + 100 e + 10 f + g, n2 = a^7 + b^7 + c^7 + d^7 + e^7 + f^7 + g^7}, If[n == n2, Sow@n]; ], {a, 9}, {b, 0, 9}, {c, 0, 9}, {d, 0, 9}, {e, 0, 9}, {f, 0, 9}, {g, 0, 9}], RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "C" ]; Reap[nar@0][[2, 1]] // AbsoluteTiming (*{0.398023, {1741725, 4210818, 9800817, 9926315}}*) </code></pre>
alephalpha
6,652
<pre><code>nar[m_] := ToExpression[ "Compile[{$},Do[With[{n=0" &lt;&gt; StringJoin[ Table["+1" &lt;&gt; Array["0" &amp;, m - 1 - i, 1, StringJoin] &lt;&gt; "a" &lt;&gt; ToString[m - 1 - i], {i, 0, m - 1}]] &lt;&gt; ",n2=0" &lt;&gt; Table["+a" &lt;&gt; ToString[m - 1 - i] &lt;&gt; "^" &lt;&gt; ToString[m], {i, 0, m - 1}] &lt;&gt; "},If[n\[Equal]n2,Sow@n];];,{a0" &lt;&gt; StringJoin[Table[",0,9},{a" &lt;&gt; ToString[i], {i, 1, m - 1}]] &lt;&gt; ",9}],RuntimeOptions\[Rule]\"Speed\",CompilationTarget\[Rule]\"C\"\ ]"]; Reap[nar[7][0]][[2, 1]] // AbsoluteTiming (*{1.184733, {9926315, 1741725, 9800817, 4210818}}*) </code></pre> <p>My computer is rather slow. @RunnyKine's code takes 0.901549 seconds on my computer. </p>
33,582
ubpdqn
1,997
<p>This may not be efficient but it is terse:</p> <pre><code>narc[n_] := Module[{r, l, t}, r = Range[n]; t = Total@(#^Length[#]) &amp; /@ (IntegerDigits /@ r); Pick[r, r - t, 0] ] </code></pre> <p><code>narc[10000000]</code> yields:</p> <pre><code>{1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208, 9474, \ 54748, 92727, 93084, 548834, 1741725, 4210818, 9800817, 9926315} </code></pre>
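For comparison with the Mathematica versions above, here is a plain (and unoptimized) sketch of the same search in Python, just to pin down the definition of a narcissistic number:

```python
def narcissistic(limit):
    """Numbers n <= limit equal to the sum of their digits, each raised
    to the number of digits of n."""
    found = []
    for n in range(1, limit + 1):
        digits = [int(d) for d in str(n)]
        if n == sum(d ** len(digits) for d in digits):
            found.append(n)
    return found

assert narcissistic(10000) == [1, 2, 3, 4, 5, 6, 7, 8, 9,
                               153, 370, 371, 407, 1634, 8208, 9474]
```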
70,728
<p>I've started taking an <a href="http://www.ml-class.org/" rel="noreferrer">online machine learning class</a>, and the first learning algorithm that we are going to be using is a form of linear regression using gradient descent. I don't have much of a background in high level math, but here is what I understand so far.</p> <p>Given <span class="math-container">$m$</span> number of items in our learning set, with <span class="math-container">$x$</span> and <span class="math-container">$y$</span> values, we must find the best fit line <span class="math-container">$h_\theta(x) = \theta_0+\theta_1x$</span> . The cost function for any guess of <span class="math-container">$\theta_0,\theta_1$</span> can be computed as:</p> <p><span class="math-container">$$J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)}) - y^{(i)})^2$$</span></p> <p>where <span class="math-container">$x^{(i)}$</span> and <span class="math-container">$y^{(i)}$</span> are the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> values for the <span class="math-container">$i^{th}$</span> component in the learning set. If we substitute for <span class="math-container">$h_\theta(x)$</span>,</p> <p><span class="math-container">$$J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m(\theta_0 + \theta_1x^{(i)} - y^{(i)})^2$$</span></p> <p>Then, the goal of gradient descent can be expressed as</p> <p><span class="math-container">$$\min_{\theta_0, \theta_1}\;J(\theta_0, \theta_1)$$</span></p> <p>Finally, each step in the gradient descent can be described as:</p> <p><span class="math-container">$$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j} J(\theta_0,\theta_1)$$</span></p> <p>for <span class="math-container">$j = 0$</span> and <span class="math-container">$j = 1$</span> with <span class="math-container">$\alpha$</span> being a constant representing the rate of step. </p> <p>I have no idea how to do the partial derivative. 
I have never taken calculus, but conceptually I understand what a derivative represents. The instructor gives us the partial derivatives for both <span class="math-container">$\theta_0$</span> and <span class="math-container">$\theta_1$</span> and says not to worry if we don't know how it was derived. (I suppose, technically, it is a computer class, not a mathematics class) However, I would very much like to understand this if possible. Could someone show how the partial derivative could be taken, or link to some resource that I could use to learn more? I apologize if I haven't used the correct terminology in my question; I'm very new to this subject.</p>
Roy Alilin
535,881
<p>The idea behind partial derivatives is finding the slope of the function with respect to one variable while the other variables' values remain constant (do not change). In other words: what is the slope of the function along the coordinate of one variable while the other variables are held fixed? As I understood it from MathIsFun, there are 2 rules for finding partial derivatives:</p> <p>1.) Terms (numbers, variables, or both, that are multiplied or divided) that do not contain the variable whose partial derivative we want to find become 0</p> <p>example:<br> f(z,x,y) = z<sup>2</sup> + x<sup>2</sup>y<br> f'<sub>z</sub> = 2z + 0</p> <p>2.) For terms which contain the variable whose partial derivative we want to find, the other variables and numbers remain the same, and we compute the derivative with respect to that variable</p> <p>example:<br> f(z,x,y,m) = z<sup>2</sup> + (x<sup>2</sup>y<sup>3</sup>)/m<br> f'<sub>x</sub> = 0 + 2xy<sup>3</sup>/m</p> <p>For linear regression, each cost value can have 1 or more inputs. For example, for finding the "cost of a property" (this is the cost), the first input X<sub>1</sub> could be the size of the property, and the second input X<sub>2</sub> could be the age of the property. We need to understand the guess function. For linear regression, the guess function forms a line (straight or curved) whose points are the guessed cost for any given values of the inputs (X<sub>1</sub>, X<sub>2</sub>, X<sub>3</sub>, ...).</p> <p>For a single input (the graph is 2-coordinate, where the y-axis is for the cost values and the x-axis is for the input X<sub>1</sub> values), the guess function is:</p> <p>H<sub>θ</sub> = θ<sub>0</sub> + θ<sub>1</sub>X<sub>1</sub></p> <p>For 2 inputs, the graph is 3-d, with 3 coordinates: the vertical axis is for the cost values, while the 2 horizontal axes, which are perpendicular to each other, are for the inputs X<sub>1</sub> and X<sub>2</sub>. 
The 3 axes are joined together at each zero value:</p> <p>H<sub>θ</sub> = θ<sub>0</sub> + θ<sub>1</sub>X<sub>1</sub> + θ<sub>2</sub>X<sub>2</sub></p> <p>For 3 inputs (not easy to graph):</p> <p>H<sub>θ</sub> = θ<sub>0</sub> + θ<sub>1</sub>X<sub>1</sub> + θ<sub>2</sub>X<sub>2</sub> + θ<sub>3</sub>X<sub>3</sub></p> <p>Note that the θ are variables and represent the weights. θ<sub>0</sub> represents the weight when all input values are zero. θ<sub>0</sub> is the base cost value; you cannot form a good guess line if the cost always starts at 0. You can actually think of θ<sub>0</sub> as multiplying an imaginary input X<sub>0</sub> which has the constant value 1.</p> <p>To get the partial derivatives of the cost function for 2 inputs, with respect to θ<sub>0</sub>, θ<sub>1</sub>, and θ<sub>2</sub>, start from the cost function:</p> <p><span class="math-container">$$ J = \frac{\sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)^2}{2M}$$</span></p> <p>where M is the number of cost samples, X<sub>1i</sub> is the value of the first input for each sample, X<sub>2i</sub> is the value of the second input for each sample, and Y<sub>i</sub> is the cost value of each sample.</p> <p>To compute the partial derivative of the cost function with respect to θ<sub>0</sub>, the whole cost function is treated as a single term, so the denominator 2M remains the same. We find the partial derivative of the numerator with respect to θ<sub>0</sub>, θ<sub>1</sub>, θ<sub>2</sub>, using the combination of the rule for differentiating a summation, the chain rule, and the power rule:</p> <p><span class="math-container">$$ f(x) = \sum_{i=1}^M (X)^n$$</span> <span class="math-container">$$ f'_x = n \cdot \sum_{i=1}^M (X)^{n-1} \cdot f'X $$</span></p> <p><span class="math-container">$$ \text{So } f'_{\theta_0} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)^1 \cdot f'_{\theta_0} ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_0} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot f'_{\theta_0} ((\theta_0 + 0 + 0) - 0)}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_0} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot f'_{\theta_0} (\theta_0)}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_0} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot 1}{2M}$$</span></p> <p><span class="math-container">$$ temp_0 = \frac{\sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)}{M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_1} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)^1 \cdot f'_{\theta_1} ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_1} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot f'_{\theta_1} ((0 + \theta_1X_{1i} + 0) - 0)}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_1} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot f'_{\theta_1} (\theta_1X_{1i})}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_1} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot X_{1i}}{2M}$$</span></p> <p><span class="math-container">$$ temp_1 = \frac{\sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot X_{1i}}{M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_2} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)^1 \cdot f'_{\theta_2} ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i)}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_2} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot f'_{\theta_2} ((0 + 0 + \theta_2X_{2i}) - 0)}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_2} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot f'_{\theta_2} (\theta_2X_{2i})}{2M}$$</span></p> <p><span class="math-container">$$ f'_{\theta_2} = \frac{2 \cdot \sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot X_{2i}}{2M}$$</span></p> <p><span class="math-container">$$ temp_2 = \frac{\sum_{i=1}^M ((\theta_0 + \theta_1X_{1i} + \theta_2X_{2i}) - Y_i) \cdot X_{2i}}{M}$$</span></p> <p>Gradient descent is:</p> <p>repeat until the cost function reaches its minimum {</p> <p>// calculation of temp<sub>0</sub>, temp<sub>1</sub>, temp<sub>2</sub> placed here (the partial derivatives for θ<sub>0</sub>, θ<sub>1</sub>, θ<sub>2</sub> found above) <span class="math-container">$$ \theta_0 = \theta_0 - \alpha \cdot temp_0 $$</span> <span class="math-container">$$ \theta_1 = \theta_1 - \alpha \cdot temp_1 $$</span> <span class="math-container">$$ \theta_2 = \theta_2 - \alpha \cdot temp_2 $$</span> }</p> <p>where α is the learning rate.</p>
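The update loop described above can be sketched in runnable form. This is a minimal Python illustration with two inputs; the sample data, learning rate, and iteration count are my own choices, not from the course:

```python
def gradient_descent(xs1, xs2, ys, alpha=0.05, iters=50000):
    """Batch gradient descent for the guess function h = t0 + t1*x1 + t2*x2."""
    m = len(ys)
    t0 = t1 = t2 = 0.0
    for _ in range(iters):
        # residuals h(x_i) - y_i for every sample
        errs = [t0 + t1 * x1 + t2 * x2 - y
                for x1, x2, y in zip(xs1, xs2, ys)]
        # temp0, temp1, temp2: the partial derivatives from the derivation
        temp0 = sum(errs) / m
        temp1 = sum(e * x1 for e, x1 in zip(errs, xs1)) / m
        temp2 = sum(e * x2 for e, x2 in zip(errs, xs2)) / m
        # simultaneous update of all three weights
        t0, t1, t2 = t0 - alpha * temp0, t1 - alpha * temp1, t2 - alpha * temp2
    return t0, t1, t2

# Synthetic data generated exactly from y = 2 + 3*x1 + 1*x2,
# so gradient descent should recover the weights (2, 3, 1).
xs1, xs2 = [0, 1, 2, 3, 4], [1, 0, 1, 0, 1]
ys = [2 + 3 * a + b for a, b in zip(xs1, xs2)]
t0, t1, t2 = gradient_descent(xs1, xs2, ys)
assert abs(t0 - 2) < 1e-3 and abs(t1 - 3) < 1e-3 and abs(t2 - 1) < 1e-3
```

Note that the factor 2 produced by the power rule cancels against the 2 in the 2M denominator, which is exactly why the cost function is conventionally divided by 2M.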
2,702,060
<p>A little confusion on my part. I'm studying multivariable calculus and we are using the formula for the length of a parameterized curve. The equation makes intuitive sense and I can work with it OK. But I also recall using the same integral without the parameterization to find the length of a curve, where the first term under the square root is just one. The former formula is the general case.</p> <p>Now for the question: I had just previously used the integral for completing the quadrature, i.e., finding the area under a curve. Is the single integral used for finding both area and length? I guess I am trying to unify the concepts in my mind, to understand the context of how they are used and know the difference. Thank you.</p>
Sri-Amirthan Theivendran
302,692
<p>A combinatorial proof. Consider the two-element subsets of $\Omega=\{0,1,\dotsc,n\}$. There are $\binom{n+1}{2}$ of them (corresponding to the right-hand side of the equality). But we can count in another way. Classify the two-element subsets by their maximum element. For $1\leq k \leq n$, there are $\binom{k}{1}=k$ two-element subsets whose maximum element is $k$, since there are $k$ non-negative integers less than $k$.</p> <p>Another proof, based on telescoping. Let $n^{\underline{2}}=n(n-1)$ (this is the falling factorial of length two; the underline on the exponent is notation). Observe that $$ \frac{1}{2}(n+1)^{\underline{2}}-\frac{1}{2}n^{\underline{2}}=n. $$ In particular $$ \sum_{k=1}^nk=\frac{1}{2}\sum_{k=1}^n \left[(k+1)^{\underline{2}}-k^{\underline{2}}\right]=\frac{(n+1)^{\underline{2}}}{2}=\frac{n(n+1)}{2}. $$</p>
2,702,060
fleablood
280,126
<p>Method 1: (requires you to consider whether $n$ is odd or even.)</p> <p>$S = 1 + 2 + ...... + n$.</p> <p>Join up the first term to the last term, the second to the second-to-last, and so on.</p> <p>$S = \underbrace{1 + \underbrace{2 + \underbrace{3 +....+(n-2)} + (n-1)} + n}$.</p> <p>$= (n+1) + (n+1) + .....$.</p> <p>If $n$ is even then:</p> <p>$S = \underbrace{1 + \underbrace{2 + \underbrace{3 +..+\underbrace{\frac n2 + (\frac n2 + 1)}+..+(n-2)} + (n-1)} + n}$</p> <p>And you have $\frac n2$ pairs that add up to $n+1$. So the sum is $S= \frac n2(n+1)$.</p> <p>If $n$ is odd then:</p> <p>$S = \underbrace{1 + \underbrace{2 + \underbrace{3 +..+\underbrace{\frac {n-1}2 + [\frac {n+1}2] + (\frac {n+1}2 + 1)}+..+(n-2)} + (n-1)} + n}$</p> <p>And you have $\frac {n-1}2$ pairs that also add up to $n+1$, plus one extra number $\frac {n+1}2$ which didn't fit into any pair. So the sum is $\frac {n-1}2(n+1) + \frac {n+1}2 =(n-1)\frac {n+1}2 + \frac {n+1}2 = (n-1 + 1)\frac {n+1}2=n\frac {n+1}2$.</p> <p>Method 1$\frac 12$: (same as above, but waves hands over doing the two cases.)</p> <p>$S = \text{average}\cdot\text{number of terms} = \text{average}\cdot n$.</p> <p>Now the average of $1$ and $n$ is $\frac {n+1}2$, the average of $2$ and $n-1$ is $\frac {n+1}2$, and so on. So the average of all of them together is $\frac {n+1}2$. So $S = \frac {n+1}2n$.</p> <p>Method 2: (doesn't require considering whether $n$ is odd or even.)</p> <p>$S = 1 + 2 + 3 + ...... + n$</p> <p>$S = n + (n-1) + (n-2) + ...... + 1$.</p> <p>$2S = S+S = (n+ 1) + (n+1) + ..... + (n+1) = n(n+1)$.</p> <p>$S = \frac {n(n+1)}2$.</p> <p>Note that by adding $S$ to itself, it doesn't matter whether $n$ is even or odd.</p> <p>And lest you are wondering why we can be so sure that $n(n+1)$ <em>must</em> be even (we constructed it so it must be true... but why?), we simply note that one of $n$ or $n+1$ must be even.</p> <p>So no problem.</p>
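Whichever method you prefer, the closed form $S = \frac{n(n+1)}{2}$ is easy to spot-check for both odd and even $n$ (a quick Python aside of my own):

```python
# Check S = n(n+1)/2 against the brute-force sum for odd and even n.
for n in (1, 2, 9, 10, 1000):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
```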
3,680,864
<p>I'm trying to understand the relation between the following conditions. I will assume that <span class="math-container">$X$</span> is a Hausdorff topological space and <span class="math-container">$A \subset X$</span>.</p> <ol> <li><span class="math-container">$\overline{A}$</span> is compact;</li> <li>Every net <span class="math-container">$\{x_{\lambda}\}_{\lambda \in \mathbb{L}} \subset A$</span> has a subnet converging to some point;</li> </ol> <p>It is clear to me that <span class="math-container">$1 \Rightarrow 2$</span>. I read that <span class="math-container">$2 \Rightarrow 1$</span> if <span class="math-container">$X$</span> is regular, but I am not able to find a proof. I would like to have a proof and, if possible, an explicit example in which the implication <span class="math-container">$2 \Rightarrow 1$</span> is false.</p>
Henno Brandsma
4,280
<p>IIRC the proof is along these lines: </p> <p>If we have a net <span class="math-container">$x_i, i \in I$</span> that is defined on <span class="math-container">$\overline{A}$</span>, we need to prove it has a convergent subnet (or cluster point) in <span class="math-container">$\overline{A}$</span>. For each <span class="math-container">$i \in I$</span> we find some net <span class="math-container">$a_j, j \in N_i$</span> on <span class="math-container">$A$</span> that converges to <span class="math-container">$x_i$</span> (as <span class="math-container">$x_i \in \overline{A}$</span> this is possible). Then, using a Kelley-like diagonal construction, we combine these nets into a "super-net" on <span class="math-container">$A$</span>; then, using the hypothesis, this super-net has some cluster point <span class="math-container">$p \in \overline{A}$</span>, and using closed neighbourhoods of <span class="math-container">$p$</span> (this is where regularity comes in) we can find a subnet of the original net <span class="math-container">$(x_i)_{i \in I}$</span> that converges to <span class="math-container">$p$</span> too. </p>
9,111
<p>What function can I use to expand $(x+y)^2$ into $x^2 + 2xy + y^2$?</p> <p>I want to expand it, and I've tried the most obvious approach: simply typing and evaluating $(x+y)^2$, but that gives me only $(x+y)^2$ as output. I've been searching for the last several minutes but still have no clue; can you help me?</p>
Vitaliy Kaurov
13
<p>Short answer is</p> <pre><code> Expand[(x + y)^2] </code></pre> <blockquote> <p>x^2 + 2 x y+ y^2</p> </blockquote> <p>But I recommend you to look at the following tutorials.</p> <ul> <li><p><a href="http://reference.wolfram.com/mathematica/tutorial/TransformingAlgebraicExpressions.html" rel="noreferrer">Transforming Algebraic Expressions</a></p></li> <li><p><a href="http://reference.wolfram.com/mathematica/tutorial/PuttingExpressionsIntoDifferentForms.html" rel="noreferrer">Putting Expressions into Different Forms</a></p></li> </ul> <p>And of course a super tutorial:</p> <ul> <li><a href="http://reference.wolfram.com/mathematica/tutorial/AlgebraicManipulationOverview.html" rel="noreferrer">Algebraic Manipulation</a></li> </ul> <p>Also this palette maybe really useful: <strong>Top Menu >> Palettes >> Other >> Algebraic Manipulation</strong> </p> <p><img src="https://i.stack.imgur.com/OXFit.png" alt="enter image description here"></p>
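As an aside (not part of the original answer): readers working outside Mathematica can get the same expansion from Python's SymPy, assuming it is installed:

```python
from sympy import expand, symbols

x, y = symbols("x y")
# expand() is SymPy's analogue of Mathematica's Expand[]
expanded = expand((x + y) ** 2)
assert expanded == x**2 + 2*x*y + y**2
```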
2,148,861
<p>In one of my junior classes, my Mathematics teacher, while teaching Mensuration, told us that <strong>metres square</strong> and <strong>square metres</strong> differ, that <strong>metres cube</strong> and <strong>cubic metres</strong> also differ, and that we should not mix them up. When I asked her the reason they are different, she told me that she would discuss it later, but she forgot, and I too forgot to remind her. Now I remember all this. Why are they different, and what is the difference between them? I have searched the internet but could not find anything valuable.</p>
Khalid parvaz
1,047,841
<p>&quot;Square metre&quot; means area, while &quot;metre square&quot; means a square having all sides 1 metre.</p> <p>So, &quot;8 square metres&quot; is an area, while an &quot;8-metre square&quot; means a square of side 8 metres (and area 64 square metres).</p>
179,223
<p>I have posted the same question on the community (<a href="http://community.wolfram.com/groups/-/m/t/1394441?p_p_auth=YV2a4wzw" rel="nofollow noreferrer">http://community.wolfram.com/groups/-/m/t/1394441?p_p_auth=YV2a4wzw</a>).</p> <p>I tried to register the movie posted below (compressed version here) using Mathematica, but to no avail. However, I could manage to do the same very easily with another piece of software (FIJI: <a href="https://fiji.sc/" rel="nofollow noreferrer">https://fiji.sc/</a>) with the plugin "StackReg" (<a href="http://bradbusse.net/sciencedownloads.html" rel="nofollow noreferrer">http://bradbusse.net/sciencedownloads.html</a>).</p> <p>Input Video:</p> <p><a href="http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-optimize.gif&amp;userId=942204" rel="nofollow noreferrer">http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-optimize.gif&amp;userId=942204</a></p> <p>The strategy that I used for the registration was as follows, in both programs:</p> <ol> <li><p>Inverting (ColorNegate) the image</p></li> <li><p>Applying a Gaussian Blur of radius 10</p></li> <li><p>Thresholding the image to obtain a mask for the object</p></li> <li><p>Registering the binarized image (mask) and saving the transformation matrices</p></li> <li><p>Using the transformation matrices to obtain a registered version of the image.</p></li> </ol> <p>For results obtained from FIJI/StackReg please see: <a href="http://community.wolfram.com//c/portal/getImageAttachment?filename=brightfield.gif&amp;userId=942204" rel="nofollow noreferrer">http://community.wolfram.com//c/portal/getImageAttachment?filename=brightfield.gif&amp;userId=942204</a></p> <p>The code for Mathematica breaks when I do the same:</p> <p><a href="https://i.stack.imgur.com/HhnEf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HhnEf.png" alt="enter image description here"></a></p> <p>Can anyone please help me figure out why ImageAlign is breaking 
down?</p> <p>This is a simple problem, and I expect that the ImageAlign function should not break down on such a petty case. I checked my masks and they seem to be fine. I gave the masks that I generated from Mathematica to FIJI/StackReg, which can successfully align them and yield the transformation matrices in a .txt file. This brings me to the second question: is there a way to get the transformation matrix (alignment matrix) from ImageAlign? I need the transformation matrices after alignment in order to align the original movie.</p> <p>Note: StackReg aligns all the images relative to the first frame.</p>
b3m2a1
38,205
<p>I know nothing about <code>ImageAlign</code> but thought it'd be fun to imitate what StackReg did in the video in your link.</p> <p>Here's a strategy based on the fact that the central blob will remain approximately circular throughout. Note that I use a bizarro way to get the initial blob form--this isn't really necessary but is instead a byproduct of thinking that was where a bug in the code was (it wasn't).</p> <p>The idea here is that we'll replace the blob with a rectangle, compute the rectangle corner points, then recenter and rotate based on this.</p> <p>So first the image prep code, so that we can find the true blob:</p> <pre><code>rawimg = Import[
   "http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-optimize.gif&amp;userId=942204"];
gray = ColorConvert[#, "Grayscale"] &amp; /@ rawimg;

boxImgPreprocess1[img_] :=
  DeleteSmallComponents[
   Binarize@Dilation[ImageClip[GradientFilter[img, .05], {.5, .8}, {1, 1}], 2],
   20
   ];
boxImgPreprocess2[img_] :=
  DeleteSmallComponents@Closing[ColorNegate@img, 25]
</code></pre> <p>We can see here what that does:</p> <pre><code>Manipulate[
 .5*boxImgPreprocess2@boxImgPreprocess1[gray[[i]]] + gray[[i]] //
   ImageResize[#, 350] &amp;,
 {i, 1, Length@gray, 1}
 ]
</code></pre> <p><a href="https://i.stack.imgur.com/J5GTq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J5GTq.png" alt="enter image description here"></a></p> <p>Then I turn this into an array like so:</p> <pre><code>boxMeUp[img_] := ImageData@boxImgPreprocess2@boxImgPreprocess1[img]
</code></pre> <p>Then we compute the corner points of the bounding box of this. 
I had two ideas for how to do it, either the loopy way or the Mathematica way:</p> <pre><code>boxCorners1[boxArray_?MatrixQ] := Module[ { boxDim = Dimensions[boxArray], xrange, xpos, xmin = None, xmax = None, ymin = None, ymax = None }, xrange = Range[boxDim[[2]]]; xpos = {0, boxDim[[1]] + 1}; Do[ xpos = Pick[xrange, boxArray[[i]], 1]; If[Length@xpos &gt; 0, If[xmin =!= None, {xmin, xmax} = MinMax[{{xmin, xmax}, MinMax[xpos]}], {xmin, xmax} = MinMax[xpos] ]; If[ymin == None, ymin = i], If[ymin =!= None, ymax = i - 1; Break[]] ], {i, Length@boxArray} ]; {{xmin, boxDim[[2]] - ymin}, {xmax, boxDim[[2]] - ymax}} ] boxCorners2[boxArray_?MatrixQ] := CoordinateBounds@Position[boxArray, 1] </code></pre> <p>The former is about an order of magnitude faster than the latter, but they're both pretty fast:</p> <pre><code>b1 = boxMeUp[gray[[2]]]; b2 = boxMeUp[gray[[11]]]; boxCorners1 /@ {b1, b2} // RepeatedTiming {0.0041, {{{166, 339}, {295, 210}}, {{152, 324}, {282, 194}}}} boxCorners2 /@ {b1, b2} // RepeatedTiming {0.040, {{{173, 302}, {166, 295}}, {{188, 318}, {152, 282}}}} </code></pre> <p>Note that <code>boxCorners2</code> would require some tweaking to work quite right.</p> <p>Now we'll check how this worked:</p> <pre><code>HighlightImage[gray[[125]], Mean /@ Transpose@boxCorners1@boxMeUp@gray[[125]]] // ImageResize[#, 350] &amp; </code></pre> <p><a href="https://i.stack.imgur.com/qJZjE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qJZjE.png" alt="enter image description here"></a></p> <pre><code>HighlightImage[gray[[125]], boxCorners1@boxMeUp@gray[[125]]] // ImageResize[#, 350] &amp; </code></pre> <p><a href="https://i.stack.imgur.com/N9jTd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N9jTd.png" alt="enter image description here"></a></p> <p>And it appears to have done fine.</p> <p>Now we turn these corner points into a rotation angle and translation vector suitable for
<code>ImageForwardTransformation</code>:</p> <pre><code>boxAlign[c1_, c2_] := Module[ { midPoints, areas, translation, angle }, midPoints = Map[Mean]@*Transpose /@ {c1, c2}; areas = Times @@ Subtract @@ Reverse@# &amp; /@ {c2, c1}; translation = Subtract @@ midPoints; angle = ArcSin[-1. + Max@{Divide @@ areas, 1}]; {angle, translation, midPoints[[1]]} ] boxTransform[{angle_, translation_, center_}, dim_] := Evaluate[ Composition[ RotationTransform[angle, center/dim], TranslationTransform[translation/dim] ]@{#1, #2} ] &amp; /. {# :&gt; #[[1]], #2 :&gt; #[[2]]} </code></pre> <p>The trick here is that I assumed the blob would be of constant area, so that if it rotates by <code>θ</code> the ratio of the areas will be transformed like <code>1 + Sin[2 θ]</code> or so. This actually turns out not to matter all that much, but it's nice for matching the video you had. It also only holds like 50% of the time, as usually the blob changes area just via some kind of breathing motion. It would work better for a rigid body.</p> <p>Finally we stitch this all together into a <code>boxImageAlign</code> function and compute some data for the main reference image:</p> <pre><code>b1 = boxMeUp[gray[[1]]]; imDim = ImageDimensions[gray[[1]]]; bc1 = boxCorners1@b1; boxImageAlign[n_Integer?(0 &lt;= # &lt;= Length@gray &amp;)] := ImageForwardTransformation[ gray[[n]], boxTransform[boxAlign[bc1, boxCorners1@boxMeUp@gray[[n]]], imDim] ]; boxImageAlign[n_Integer?(0 &lt;= # &lt;= Length@gray &amp;), m_Integer?(0 &lt;= # &lt;= Length@gray &amp;)] := ImageForwardTransformation[ gray[[n]], boxTransform[ boxAlign[boxCorners1@boxMeUp@gray[[m]], boxCorners1@boxMeUp@gray[[n]]], imDim] ]; boxImageCheckAlignment[n_] := With[{b1 = boxImageAlign[n], m = 1}, If[ImageQ@b1, { { ImageResize[gray[[m]]*gray[[n]], Scaled[.5]], ImageResize[gray[[m]] + .5*Image@boxMeUp@gray[[n]], Scaled[.5]] }, { ImageResize[gray[[m]]*b1, Scaled[.5]], ImageResize[gray[[m]] + .5*Image@boxMeUp@b1, Scaled[.5]] } } // ImageAssemble,
$Failed ] ]; boxImageCheckAlignment[n_, m_] := With[{b1 = boxImageAlign[n, m]}, If[ImageQ@b1, { { ImageResize[gray[[m]]*gray[[n]], Scaled[.5]], ImageResize[gray[[m]] + .5*gray[[n]], Scaled[.5]] }, { ImageResize[gray[[m]]*b1, Scaled[.5]], ImageResize[gray[[m]] + .5*Image@boxMeUp@b1, Scaled[.5]] } } // ImageAssemble, $Failed ] ] </code></pre> <p>Then we check one of the images with maximum misalignment:</p> <pre><code>boxImageCheckAlignment[75] </code></pre> <p><a href="https://i.stack.imgur.com/SjECr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SjECr.png" alt="enter image description here"></a></p> <p>And the blobs match much better</p> <p>If you turn off the rotation it honestly barely changes anything...</p> <p>Finally here's an animation (not the smoothest, but turning off rotation might help):</p> <p><a href="https://www.wolframcloud.com/objects/mathematicase/answers/blob_gif.gif" rel="nofollow noreferrer"><img src="https://www.wolframcloud.com/objects/mathematicase/answers/blob_gif.gif" alt="animation"></a></p>
2,336,988
<blockquote> <p>Let $a,b,c&gt;0$, $2b+2c-a\ge 0$, $2c+2a-b\ge 0$, $2a+2b-c\ge 0$. Show that $$\sqrt{\dfrac{2b+2c}{a}-1}+\sqrt{\dfrac{2c+2a}{b}-1}+\sqrt{\dfrac{2a+2b}{c}-1}\ge 3\sqrt{3}$$</p> </blockquote> <p>I tried using AM-GM and the Cauchy-Schwarz inequality, but from here I don't see what to do.</p>
Michael Rozenberg
190,319
<p>Since our inequality is homogeneous, we can assume that $a+b+c=3$.</p> <p>Hence, $2b+2c-a=2(3-a)-a=3(2-a)\geq0$, which gives $\{a,b,c\}\subset(0,2]$.</p> <p>Thus, we need to prove that $$\sum_{cyc}\sqrt{\frac{2(b+c)}{a}-1}\geq3\sqrt3$$ or $$\sum_{cyc}\sqrt{\frac{2(3-a)}{a}-1}\geq3\sqrt3$$ or $$\sum_{cyc}\sqrt{\frac{2}{a}-1}\geq3$$ or $$\sum_{cyc}\left(\sqrt{\frac{2}{a}-1}-1\right)\geq0$$ or $$\sum_{cyc}\frac{1-a}{\sqrt{a}(\sqrt{2-a}+\sqrt{a})}\geq0$$ or $$\sum_{cyc}\left(\frac{1-a}{\sqrt{a}(\sqrt{2-a}+\sqrt{a})}+\frac{a-1}{2}\right)\geq0$$ or $$\sum_{cyc}\frac{(a-1)(\sqrt{a(2-a)}+a-2)}{\sqrt{a}(\sqrt{2-a}+\sqrt{a})}\geq0$$ or</p> <p>$$\sum_{cyc}\frac{(a-1)\sqrt{2-a}(\sqrt{a}-1)}{\sqrt{a}(\sqrt{2-a}+\sqrt{a})}\geq0$$ or $$\sum_{cyc}\frac{(a-1)^2\sqrt{2-a}}{\sqrt{a}(\sqrt{2-a}+\sqrt{a})(\sqrt{a}+1)}\geq0.$$ Done!</p>
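<p>Not part of the original proof, but here is a quick numerical spot-check of the inequality in Python (the function name <code>lhs</code> is just for this sketch):</p>

```python
import math
import random

def lhs(a, b, c):
    # the cyclic sum sqrt(2(b+c)/a - 1) + sqrt(2(c+a)/b - 1) + sqrt(2(a+b)/c - 1)
    return (math.sqrt(2*(b + c)/a - 1)
            + math.sqrt(2*(c + a)/b - 1)
            + math.sqrt(2*(a + b)/c - 1))

# equality case a = b = c
assert abs(lhs(1, 1, 1) - 3*math.sqrt(3)) < 1e-12

# random triples, normalized so a + b + c = 3 (by homogeneity), keeping each
# variable in (0, 2] so the admissibility conditions 2b+2c-a >= 0 etc. hold
random.seed(0)
for _ in range(1000):
    a, b, c = (random.uniform(0.05, 2.0) for _ in range(3))
    s = 3.0 / (a + b + c)
    a, b, c = a*s, b*s, c*s
    if max(a, b, c) <= 2.0:
        assert lhs(a, b, c) >= 3*math.sqrt(3) - 1e-9
```

<p>The equality case $a=b=c$ hits the bound $3\sqrt3$ exactly, matching the proof above.</p>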
2,379,405
<blockquote> <p>Determine convergence or divergence of $$ \int_0^{\infty} \frac{1 + \cos^2x}{\sqrt{1+x^2}} dx$$</p> </blockquote> <p>As the graph of the function suggests convergence, let's find an upper bound that converges.</p> <p>$$ \int_0^{\infty} \frac{1 + \cos^2x}{\sqrt{1+x^2}} dx \leq \int_0^{\infty} \frac{2}{\sqrt{1+x^2}} dx \leq \int_0^{\infty} \frac{2}{x} dx$$</p> <p>Given that $\int_0^{\infty} \frac{2}{x} dx $ diverges, I won't be able to show that the original converges.</p> <p>What is my error? What upper bound could I choose?</p>
Simply Beautiful Art
272,831
<p>Showing that there is an upper bound that diverges to $+\infty$ doesn't actually prove anything. What we want is a lower bound:</p> <p>$$x&gt;1\implies\frac{1+\cos^2x}{\sqrt{1+x^2}}\ge\frac1{\sqrt{x^2+x^2}}=\frac{2^{-1/2}}x$$</p> <p>And it seems you already know that $\int_0^\infty\frac1x~\mathrm dx$ diverges.</p>
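<p>As a numerical illustration (Python; not part of the original answer): the partial integrals keep growing and each one dominates the lower bound $2^{-1/2}\log T$ above.</p>

```python
import math

def integrand(x):
    return (1 + math.cos(x)**2) / math.sqrt(1 + x*x)

def partial_integral(T, n=100000):
    # midpoint rule on [1, T]; crude, but plenty for a monotonicity check
    h = (T - 1) / n
    return h * sum(integrand(1 + (k + 0.5)*h) for k in range(n))

vals = {T: partial_integral(T) for T in (10, 100, 1000)}

# the partial integrals grow with no sign of leveling off...
assert vals[10] < vals[100] < vals[1000]
# ...and each one sits above the lower bound log(T)/sqrt(2)
for T, v in vals.items():
    assert v >= math.log(T) / math.sqrt(2)
```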
2,379,405
<blockquote> <p>Determine convergence or divergence of $$ \int_0^{\infty} \frac{1 + \cos^2x}{\sqrt{1+x^2}} dx$$</p> </blockquote> <p>As the graph of the function suggests convergence, let's find an upper bound that converges.</p> <p>$$ \int_0^{\infty} \frac{1 + \cos^2x}{\sqrt{1+x^2}} dx \leq \int_0^{\infty} \frac{2}{\sqrt{1+x^2}} dx \leq \int_0^{\infty} \frac{2}{x} dx$$</p> <p>Given that $\int_0^{\infty} \frac{2}{x} dx $ diverges, I won't be able to show that the original converges.</p> <p>What is my error? What upper bound could I choose?</p>
hamam_Abdallah
369,188
<p><strong>hint</strong> $$1+\cos^2 (x)\ge 1$$</p> <p>$$\sqrt {1+x^2}\sim x \;(x\to +\infty) $$</p>
2,146,911
<blockquote> <p>Given natural numbers <span class="math-container">$m,n,$</span> and a real number <span class="math-container">$a&gt;1$</span>, prove the inequality:</p> <p><span class="math-container">$$\displaystyle a^{\frac{2n}{m}} - 1 \geq n\big(a^{\frac{n+1}m} - a^{\frac{n-1}{m}}\big)$$</span></p> <p><strong>SOURCE:</strong> <a href="http://imomath.com/pcpdf/f1/f40.pdf" rel="nofollow noreferrer">Inequalities</a> (PDF) (Page Number 2; Question Number 153.2)</p> </blockquote> <p>I have been trying this problem for 2 weeks but still no success. I tried every method I could think of, like AM-GM, C-S, Holder and more, but could not find a proof.</p> <p>Also, is it necessary for <span class="math-container">$n,m$</span> to be natural numbers?</p> <p>Any help will be gratefully acknowledged.</p> <p>Thanks in advance! :)</p>
Martin R
42,969
<p>Let $x = a^{\frac 1m} &gt; 1$. Using $$ x^{2n} - 1 = (x-1)(1+ x+x^2 + \ldots + x^{2n-1}) \\ x^{n+1} - x^{n-1} = (x-1) (x^{n-1}+x^n) $$ we get $$ x^{2n} - 1 - n(x^{n+1} - x^{n-1}) = (x-1)\left( 1+ x+x^2 + \ldots + x^{2n-1} - n(x^{n-1}+x^n) \right) \\ = (x-1)\sum_{k=1}^n \left( x^{k-1} + x^{2n-k} - x^{n-1}-x^n\right) \\ = (x-1)\sum_{k=1}^n (x^{n-k}-1)(x^n - x^{k-1}) \\ \ge 0 $$ (with strict inequality for $n \ge 2$), which is the desired inequality $$ x^{2n} - 1 \ge n(x^{n+1} - x^{n-1}) \, . $$ This proof works for positive real $m$ and integer $n \ge 1$. For $x &lt; 1$ the same inequality with $\ge$ replaced by $\le$ holds.</p>
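<p>A quick exact check of the factorization above (Python with rational arithmetic; not part of the original answer):</p>

```python
from fractions import Fraction

def lhs(x, n):
    # x^(2n) - 1 - n (x^(n+1) - x^(n-1))
    return x**(2*n) - 1 - n * (x**(n + 1) - x**(n - 1))

def rhs(x, n):
    # (x - 1) * sum over k of (x^(n-k) - 1)(x^n - x^(k-1))
    return (x - 1) * sum((x**(n - k) - 1) * (x**n - x**(k - 1))
                         for k in range(1, n + 1))

# exact arithmetic over the rationals: the identity and the sign claim
for n in range(1, 8):
    for x in (Fraction(3, 2), Fraction(7, 5), Fraction(2, 1)):
        assert lhs(x, n) == rhs(x, n)
        assert lhs(x, n) >= 0    # hence x^(2n) - 1 >= n(x^(n+1) - x^(n-1))
```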
2,307,021
<p>I am struggling with a confusing differentials' problem. It seems like there is a key piece of information missing:</p> <p><strong>The problem:</strong></p> <blockquote> <p>The electrical resistance $ R $ of a copper wire is given by $ R = \frac{k}{r^2} $ where $ k $ is a constant and $ r $ is the radius of the wire. Suppose that the radius has an error of $ \pm 5\% $, find the $\%$ error of $ R $.</p> </blockquote> <p><strong>My solution:</strong></p> <p>\begin{align*} R &amp;= \frac{k}{r^2}\\ \frac{dR}{dr} &amp;= k \cdot (-2) \cdot r^{-3} \quad \therefore \quad dR = \frac{-2k \cdot 0.05}{r^3} = \frac{-0.1k}{r^3}\\ \end{align*}</p> <p>So the percentage error is given by</p> <p>\begin{align*} E_\% = \frac{\frac{-0.1k}{r^3}}{\frac{k}{r^2}} = - \frac{0.1}{r} \end{align*}</p> <p><strong>My question:</strong> Am I missing something? Should I have arrived in a real value (not a function of $ r $ )? Is there information missing on the problem?</p> <p>Thank you.</p>
Archis Welankar
275,884
<p>Taking logs we have $\ln (R)=\ln (k)-\ln (r^2)$; thus, differentiating, we have $\frac {dR}{R}=-2\frac {dr}{r}$. Now multiplying by $100$ we have $\text{percent error in }R=-2\cdot\text{percent error in }r$, thus $\text{percent error in }R=\mp 10\%$ (as the radius increases the resistance decreases, and vice-versa).</p>
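<p>A quick numeric comparison (Python; not part of the original answer) of the exact relative change in $R$ against this differential estimate:</p>

```python
k, r = 3.0, 2.0              # k is arbitrary; it cancels in the relative error
R = k / r**2

for dr_pct in (+5.0, -5.0):
    r_new = r * (1 + dr_pct / 100)
    exact_pct = (k / r_new**2 - R) / R * 100
    linear_pct = -2 * dr_pct          # the dR/R = -2 dr/r estimate
    # exact changes are about -9.30% and +10.80%; the linearization says -/+10%
    assert abs(exact_pct - linear_pct) < 1.0

# sign check: radius up => resistance down
assert (k / (r * 1.05)**2 - R) < 0
```

<p>The differential answer $\mp 10\%$ is only the first-order estimate; the exact changes are asymmetric around it.</p>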
1,903,717
<p>This is actually from an analysis text but I feel it's a set theory question.</p> <p>Proposition: for every rational number $\epsilon &gt; 0$ there exists a non-negative rational number x s.t $x^2 &lt; 2 &lt; (x+ \epsilon )^2 $.</p> <p>It provides a proof that I'm having trouble understanding.</p> <p>Proof: let $ \epsilon &gt;0$ be rational. Suppose for the sake of contradiction that there is no non-negative rational number x for which $x^2 &lt; 2 &lt; (x+ \epsilon )^2 $ holds.</p> <p>I.e., whenever $ x^2 &lt; 2$, we also have $(x+ \epsilon )^2 &lt;2 $.</p> <p>It states by a previous proposition that $(x+ \epsilon )^2 $ cannot equal 2.</p> <p><strong>Then it states "Since $0^2 &lt; 2$ we thus have $ \epsilon ^2 &lt; 2$ which then implies that $ (2\epsilon )^2 &lt; 2$ and indeed a simple induction shows that $ (n\epsilon )^2 &lt; 2$ for every natural number n." Which is what I can't understand.</strong></p> <p>The rest of the proof is strange as well; I'm fine with the statement $ \epsilon ^2 &lt; 2$ as it clearly follows that $ \epsilon ^2 &lt; (x+ \epsilon )^2 $ as x is positive and $ \epsilon ^2$ is on both sides of the expression.</p> <p>If I was proving it then I would rewrite </p> <p>$ \epsilon ^2 = n $ $ \epsilon' $ s.t $n \in \mathbb {N} $ and $ \epsilon' \in \mathbb {Q} $ </p> <p>I would then use the Archimedean property to prove this is a contradiction. </p> <p>If anyone can follow/explain what the bold text means I would greatly appreciate it.</p>
Noah Schweber
28,111
<p>Before the bolded passage, you've concluded that if the statement you're trying to prove <em>fails</em>, then it must be the case that $x^2&lt;2$ implies $(x+\epsilon)^2&lt;2$. </p> <p>Now, just take $x=0$. $0^2=0&lt;2$, so we must have $(0+\epsilon)^2=\epsilon^2&lt;2$. </p> <p>So $\epsilon^2&lt;2$. Now take $x=\epsilon$. Since $\epsilon^2&lt;2$, we have $(\epsilon+\epsilon)^2=(2\epsilon)^2&lt;2$.</p> <p>Now take $x=2\epsilon$ . . .</p> <p>More generally, having shown that $(n\epsilon)^2&lt;2$, we may now take $x=n\epsilon$ and conclude that $(x+\epsilon)^2=((n+1)\epsilon)^2&lt;2$. So, by induction, $(n\epsilon)^2&lt;2$ for every natural number $n$. Since $\epsilon&gt;0$, this contradicts the Archimedean property.</p> <p>From your question, I think you're a little confused about what exactly is being proved by induction here. The goal is to prove $$(*)\quad \mbox{For each natural number $n$, $(n\epsilon)^2&lt;2$.}$$ Note that this is <em>not</em> an induction on $\epsilon$ - it's an induction on <em>the coefficient of</em> $\epsilon$. The base case is $n=1$ (that is, showing $\epsilon^2&lt;2$), and the inductive step is as described above.</p>
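<p>A small computational illustration of the final step (Python with exact rational arithmetic; not part of the original answer): for any fixed rational $\epsilon&gt;0$, the squares $(n\epsilon)^2$ do eventually exceed $2$, which is exactly the Archimedean contradiction.</p>

```python
from fractions import Fraction

for eps in (Fraction(1, 7), Fraction(3, 100), Fraction(1, 1000)):
    n = 0
    while (n * eps)**2 < 2:
        n += 1
    # the induction would force (n*eps)^2 < 2 for ALL n, but in fact:
    assert (n * eps)**2 >= 2
    assert ((n - 1) * eps)**2 < 2     # n is the first failure point
```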
2,025,934
<p>Let $V$ be an $n$-dimensional vector space with $\dim (V) =: n \ge 2$.</p> <p>We shall prove that there are infinitely many $k$-dimensional subspaces of $V$, $\forall k \in \{1, 2, ..., n-1\}$.</p> <p>So first, I thought about using induction; the base step is not that hard: for $n=2$ we take two vectors, say $a$ and $b$, and define infinitely many 1-dimensional subspaces as span$\{a+jb\}$ for $j \in \mathbb N$.</p> <p>It is easy to see those vector spaces are not all equal, but I kinda realised that induction is not the way to go, as I think $n$ is fixed.</p> <p>Anyhow, then I thought about using the finiteness of a basis for $V$ to try to construct those subspaces (using vectors from the basis). I failed to do so, so I'm just asking for a hint or any useful advice on where to start with this.</p>
Alex M.
164,025
<p>In fact, induction does work, even though there are other, more direct approaches. I am going to assume that $V$ is a vector space over some infinite field, otherwise your result is false.</p> <p>If $n=2$, then $k=1$: consider then all the straight lines passing through the origin; it is obvious and it doesn't require a proof that they are infinitely many.</p> <p>Assume the statement true for $n$ and consider $V$ with $\dim V = n+1$.</p> <ol> <li><p>The case $k &lt; n$: if there were only finitely many subspaces of dimension $k$, then let $W \subset V$ be a subspace of $\dim W = n$; by the induction hypothesis, $W$ has infinitely many subspaces of dimension $k$, but these are also subspaces of $V$, which were assumed to be finitely many - so we have obtained a contradiction. This is the part where we use induction.</p></li> <li><p>The case $k=n$: if you choose a basis, then each $v = (v_1, \dots, v_n)$ will give rise to the linear form $f_v (u) = u_1 v_1 + \dots + u_n v_n$. Notice that $\dim (\ker f_v) = n$ and that $\ker f_v = \ker f_w$ if and only if there exists $\lambda$ such that $v = \lambda w$. But there are infinitely many non-proportional vectors in $V$ (they are in bijective correspondence with the projective space of $V$), so there are infinitely many different linear forms, therefore infinitely many subspaces of dimension $n$.</p></li> </ol>
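<p>To make the "infinitely many non-proportional vectors" point concrete, here is a tiny Python check (not part of the original answer) that the vectors $(1, j)$ in $\mathbb R^2$ are pairwise non-proportional, so their spans are pairwise distinct lines:</p>

```python
def proportional(u, v):
    # two vectors in R^2 are linearly dependent iff the 2x2 determinant vanishes
    return u[0]*v[1] - u[1]*v[0] == 0

vectors = [(1, j) for j in range(100)]
for i in range(len(vectors)):
    for j in range(i + 1, len(vectors)):
        assert not proportional(vectors[i], vectors[j])
```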
1,073,459
<p>I'd like to better understand states on C*-algebras.</p> <p>What properties should I investigate and in which order?</p> <p><em>(Positive functionals, extremal states, Schwarz's inequality, Kadison's inequality, what else?)</em></p> <p>I assume basic facts from functional analysis.</p> <p><em>(C*-algebras, spectral theory, functional calculus, Banach-Alaoglu, etc.)</em></p> <p>Thanks a lot for your ideas!! :)</p>
Simon S
21,495
<p>If you were integrating over the volume</p> <p>$$\left(\frac{x}{a}\right)^{2} + \left(\frac{y}{b}\right)^{2} + \left(\frac{z}{c}\right)^{2} \leq 1$$</p> <p>you would use spherical polars with $x = ar\sin\theta\cos\phi$, $y = br...$, $z = cr...$.</p> <p>Now try and modify those so they fit your shape by taking the appropriate powers. Then calculate the Jacobian....</p>
1,073,459
<p>I'd like to better understand states on C*-algebras.</p> <p>What properties should I investigate and in which order?</p> <p><em>(Positive functionals, extremal states, Schwarz's inequality, Kadison's inequality, what else?)</em></p> <p>I assume basic facts from functional analysis.</p> <p><em>(C*-algebras, spectral theory, functional calculus, Banach-Alaoglu, etc.)</em></p> <p>Thanks a lot for your ideas!! :)</p>
Mark McClure
21,361
<p>Let's write \begin{align} x &amp;= a(\rho\sin(\phi)\sin(\theta))^3 \\ y &amp;= b(\rho\sin(\phi)\cos(\theta))^3 \\ z &amp;= c(\rho\cos(\phi))^3, \end{align} for then, $(x/a)^{2/3} + (y/b)^{2/3} + (z/c)^{2/3} = \rho^2$. Furthermore, the Jacobian of the change of variables is $$27 a b c \rho ^8 \sin ^2(\theta ) \cos ^2(\theta ) \sin^5(\varphi ) \cos ^2(\varphi ).$$ Thus, the volume can be computed as $$8\int _0^1\int _0^{\frac{\pi }{2}}\int _0^{\frac{\pi }{2}}27 a b c \rho ^8 \cos ^2(\theta ) \cos ^2(\varphi ) \sin ^2(\theta ) \sin ^5(\varphi )\, d\varphi \,d\theta\, d\rho = 8\cdot\frac{\pi}{70} a b c = \frac{4}{35}\pi a b c.$$</p> <hr> <p>You can also use the parametrization to visualize the object. Here's what it looks like for $a=b=3$ and $c=2$.</p> <p><img src="https://i.stack.imgur.com/3ov8z.png" alt="enter image description here"></p>
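<p>As a sanity check on the closed form (the standard volume of the astroidal ellipsoid is $\frac{4}{35}\pi a b c$), here is a quick Monte Carlo estimate in Python — not part of the original Mathematica answer:</p>

```python
import math
import random

random.seed(1)
a, b, c = 3.0, 3.0, 2.0
N = 200000
hits = 0
for _ in range(N):
    # sample the first octant of the unit superellipsoid; scaling by a, b, c
    # multiplies the volume by a*b*c, and symmetry gives the factor of 8
    x, y, z = random.random(), random.random(), random.random()
    if x**(2/3) + y**(2/3) + z**(2/3) <= 1.0:
        hits += 1

volume_mc = 8 * a * b * c * hits / N
volume_exact = 4 * math.pi * a * b * c / 35
assert abs(volume_mc - volume_exact) / volume_exact < 0.06
```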
1,619,292
<p>Let $\mathbf C$ be an abelian category containing arbitrary direct sums and let $\{X_i\}_{i\in I}$ be a collection of objects of $\mathbf C$. </p> <p>Consider a subobject $Y\subseteq \bigoplus_{i\in I}X_i$ and put $Y_i:=p_i(Y)$ where $p_i:\bigoplus_{i\in I}X_i\longrightarrow X$ is the obvious projection. </p> <p>Is $Y$ a subobject of $\bigoplus_{i\in I}Y_i$?</p> <p>This seems so obvious, but I can't seem to be able to prove it. </p>
Martin Peters
185,067
<p>My recommendations are:</p> <blockquote> <p>Ivar Ekeland: <em>The broken dice, and other mathematical tales of chance</em></p> <p>Vladimir Arnold: <em>Catastrophe Theory</em>.</p> </blockquote>
2,581,135
<blockquote> <p>Find: $\displaystyle\lim_{x\to\infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}.$</p> </blockquote> <p>Question from a book on preparation for math contests. All the tricks I know to solve this limit are not working. Wolfram Alpha struggled to find $1$ as the solution, but the solution process presented is not understandable. The answer is $1$.</p> <p>Hints and solutions are appreciated. Sorry if this is a duplicate.</p>
Stefan4024
67,746
<p>Divide by $\sqrt{x}$ to get </p> <p>$$\lim_{x \to \infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}} = \lim_{x \to \infty} \frac{1}{\sqrt{1 + \sqrt{\frac 1x + \sqrt{\frac{1}{x^3}}}}} = 1$$</p>
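<p>A numerical confirmation (Python; not part of the original answer) that the rewritten form is the same function and that both approach $1$:</p>

```python
import math

def f(x):
    return math.sqrt(x) / math.sqrt(x + math.sqrt(x + math.sqrt(x)))

def g(x):
    # the form obtained after dividing numerator and denominator by sqrt(x)
    return 1 / math.sqrt(1 + math.sqrt(1/x + math.sqrt(1/x**3)))

for x in (1e2, 1e4, 1e6, 1e8):
    assert abs(f(x) - g(x)) < 1e-12          # same function, just rewritten
    assert abs(f(x) - 1) < 2 / math.sqrt(x)  # and it tends to 1

assert abs(f(1e12) - 1) < 1e-5
```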
3,842,653
<p>I'm working on a problem for a class and I'm a bit confused on what exactly the question is asking, the question is as follows,</p> <p>Suppose <span class="math-container">$(x_n)$</span> is a sequence in <span class="math-container">$\Bbb R$</span>. Prove that <span class="math-container">$\bigl\{a \in \Bbb R : \text{there is a subsequence }(x_{n_{k}}) \text{ with } (x_{n_{k}}) \to a \bigr\}= \bigcap^{\infty}_{n=1} \overline{\{ x_n,x_{n+1},x_{n+2},...\}}$</span></p> <p>I know that this involves dealing with the closure of a set which we defined as,</p> <p>Suppose <span class="math-container">$A\subseteq R$</span>, we define the closure of <span class="math-container">$A$</span> denoted by <span class="math-container">$\overline{A}$</span> by <span class="math-container">$\overline{A}=\{x \in R: \exists \ (a_n)$</span> in <span class="math-container">$A$</span> such that <span class="math-container">$(a_n) \to x \}$</span>.</p> <p>Thanks for the help in advance as I'm confused on exactly what to show.</p> <p>Edit note: Had to adjust as R is the set of the reals.</p>
NicholasLP
829,598
<p>I think that you need to prove that the set of the <span class="math-container">$a$</span>'s, defined as the limits of subsequences of <span class="math-container">$x_n$</span>, is equal to the intersection over all <span class="math-container">$n$</span> of the closures of the tails <span class="math-container">$\{ x_n, x_{n+1}, x_{n+2}, \dots \}$</span>.</p>
2,968,655
<p>My numerical calculations suggest that the equation <span class="math-container">$$x = \frac{1}{1+e^{-a+bx}}$$</span> has a unique solution for any <span class="math-container">$a,b \in \mathbb R$</span>. How would one go about showing this?</p>
B. Goddard
362,009
<p>(Someday, I'll learn how to include pictures on here.)</p> <p>Let <span class="math-container">$r=e^{-a}$</span> and note that <span class="math-container">$r$</span> is positive.</p> <p>We solve the equation for <span class="math-container">$e^{bx}$</span> to get</p> <p><span class="math-container">$$e^{bx} = \frac{1-x}{rx}.$$</span></p> <p>The short answer is "look at the graphs of each side."</p> <p>The left side is always positive. The right side is positive only for <span class="math-container">$0&lt;x&lt;1$</span>. In that interval its derivative is negative, so it's strictly decreasing. The left side is either strictly increasing or strictly decreasing (or constant, if <span class="math-container">$b=0$</span>). So there is only one possible intersection point of the two curves.</p>
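<p>Not part of the original answer, but here is a small Python spot-check for moderate values of $b$, where $g(x) = x - \frac{1}{1+e^{-a+bx}}$ stays strictly increasing, confirming a single crossing in $(0,1)$:</p>

```python
import math

def g(x, a, b):
    # x solves x = 1/(1 + e^(-a + b x)) exactly when g(x) = 0
    return x - 1 / (1 + math.exp(-a + b*x))

# any solution lies in (0, 1), since the right-hand side is always in (0, 1);
# for |b| <= 2 one can check g'(x) = 1 + b*sigma'(a - b x) >= 1/2 > 0,
# so g is strictly increasing and the root is unique
for a in (-2.0, 0.0, 2.0):
    for b in (-2.0, 0.0, 2.0):
        xs = [k / 1000 for k in range(1001)]
        vals = [g(x, a, b) for x in xs]
        assert vals[0] < 0 < vals[-1]                          # a root exists
        assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))  # and is unique
```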
2,179,289
<p>Every valuation ring is an integrally closed local domain, and the integral closure of a local ring is the intersection of all valuation rings containing it. It would be useful for me to know when integrally closed local domains are valuation rings.</p> <p>To be more specific,</p> <blockquote> <p>is there a property $P$ of unitary commutative rings that is strictly weaker than being a valuation ring, such that an integrally closed local domain is a valuation ring iff it satisfies the property $P$.</p> </blockquote>
mzafrullah
60,902
<p>Note the following statements.</p> <p>I. A quasi local domain $(D,M)$ is a valuation domain if and only if $D$ is a Bezout domain (i.e. for every pair $a,b$ in $D,$ the ideal $(a,b)$ is principal or, equivalently, every finitely generated ideal of $D$ is principal).</p> <p>If $D$ is a valuation domain then as for each pair $a,b$ in $D$ we have $a|b$ or $b|a$, giving $(a,b)=(a)$ or $(a,b)=(b)$, that is $D$ is Bezout.</p> <p>Conversely, take $a,b$ in $D.$ If either of $a,b$ is zero or a unit $a|b$ or $b|a.$ So, let both $a,b$ be nonzero non units. Since $D$ is Bezout, $(a,b)=(c)$ for some $c$ in $D.$ Clearly $c|a,b.$ Let $a=a^{\prime }c$ and $b=b^{\prime }c.$ Substituting, we get $(a^{\prime }c,b^{\prime }c)=(c)$. Canceling $c$ from both sides we get $(a^{\prime },b^{\prime })=D.$ As in a quasi local domain nonzero non units generate a proper ideal, at least one of $a^{\prime },b^{\prime }$ is a unit. So, $a^{\prime }|b^{\prime }$ or $b^{\prime }|a^{\prime }$ leading to $a^{\prime }c|b^{\prime }c$ or $b^{\prime }c|a^{\prime }c$ and to $a|b$ or $b|a.$</p> <p>II. A quasi local domain $(D,M)$ is a valuation domain if and only if $D$ is a Prufer domain (every two generated nonzero ideal is invertible or, equivalently, every finitely generated nonzero ideal is invertible).</p> <p>Follows from I. once we note that in a quasi local domain each invertible ideal is principal.</p> <p>Note that P: $D$ is Bezout or P: $D$ is Prufer both are non-trivial in that there are Bezout (resp., Prufer) domains that are not valuation domains. So perhaps that would suffice as an answer.</p> <p>Now the above two results do not require the domain $(D,M)$ to be integrally closed and you are asking for a property P such that $(D,M)$ is a valuation domain. Here is the exact property P: Every finitely generated nonzero ideal of $D$ is a $v$-ideal (i.e. a divisorial ideal). So we have the statement.</p> <p>III.
An integrally closed quasi local domain $(D,M)$ is a valuation domain if and only if every nonzero finitely generated ideal of $D$ is a $v$-ideal.</p> <p>For the proof look up Theorem 8, on pages 1710-1711, of an old paper of mine: [Z] The $v$-operation and intersections of quotient rings of integral domains, Comm. Algebra, 13 (8) (1985) 1699-1712.</p> <p>The cited theorem says: An integrally closed fgv domain is a Prufer domain. </p> <p>Now "fgv domain" is a fancy name for a domain whose nonzero finitely generated ideals are divisorial. Indeed, as every invertible ideal is divisorial, the converse of Theorem 8 of [Z] is also true. You can also get information on divisorial ideals (i.e. $v$-ideals) from [Z] or sources mentioned there.</p> <p>Proof of III. Let $(D,M)$ be an integrally closed domain such that every nonzero finitely generated ideal of $D$ is divisorial. Then $D$ is a Prufer domain by Theorem 8 of [Z] and by II. above $D$ is a valuation domain. Conversely let $(D,M)$ be a valuation domain; then every nonzero finitely generated ideal of $D$ is principal and so divisorial.</p> <p>Note: If you would rather follow Hagen's suggestion, here's how to go about it. Note that a nonzero ideal $A$ is a $t$-ideal if for each finitely generated nonzero ideal $I$ contained in $A$ the $v$-image $I_v$ is also contained in $A$. So Hagen wants you to use P: $(D,M)$ is such that $D$ satisfies FC and $M$ is a $t$-ideal. An easy way to see what Hagen means is to look up Lemma 5 of the finite conductor domains paper mentioned above by him. </p>
2,828,205
<p><a href="https://i.stack.imgur.com/JJRaZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JJRaZ.png" alt="enter image description here"></a></p> <p>First we take the identity element from the set, which is the identity matrix, so S=I, for which b(σ(x),σ(y))=b(x,y); this is the identity transformation in O(V,b), so the kernel becomes trivial, and the map is surjective since it is finite dimensional. Is this correct or not? Please guide me regarding this.</p>
Federico Fallucca
531,470
<p>$O(V,b)$ is the set of linear maps $\sigma$ that satisfy $b(\sigma(x),\sigma(y))=b(x,y)$ for every $x,y\in V$, and $B$ is the matrix associated to $b$ in the canonical basis (for example). Now you can prove that when you fix a basis on $V$ </p> <p>$b(x,y)=x^tBy$</p> <p>And so if you consider $A_\sigma$ the matrix associated to $\sigma \in O(V,b)$ in that basis, then</p> <p>$b(\sigma(x),\sigma(y))=x^t(A_\sigma^tBA_\sigma) y=b(x,y)=x^tBy$</p> <p>So $A_\sigma$ satisfies the property $ x^t(A_\sigma^tBA_\sigma)y=x^tBy$ for every $x,y\in V$</p> <p>and in this case it is simple to prove that $A_\sigma$ satisfies the property </p> <p>$A_\sigma^t BA_\sigma=B$</p> <p>Now if $b$ is a definite bilinear form then you have the property $y^tBy\neq 0$ for every $y\in V/ \{0\}$, and so if $det(A_\sigma)=0$ then there exists an $x\in V/ \{0\}$ such that $A_\sigma x=0$; but $x^tBx =x^t(A_\sigma^tBA_\sigma)x= x^t(A_\sigma^tB)(A_\sigma x)=0$, and that is impossible.</p> <p>So you have also that $det(A_\sigma)\neq 0$.</p> <p>Now if you define $O(V,B)=\{A\in M_n(\mathbb{R}) : det(A)\neq 0, A^tBA=B\}$</p> <p>there exists a map $\Psi: O(V,b) \to O(V,B)$ that maps every $\sigma\in O(V,b)$ to $A_\sigma$ </p> <p>This map is obviously bijective, and your sets are also groups, so it is natural to ask whether it is also a group morphism. </p>
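<p>A concrete sketch in Python (not from the original answer): with $B$ the identity (the standard inner product on $\mathbb{R}^2$), a rotation matrix $A$ satisfies $A^tBA=B$ and has nonzero determinant:</p>

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

B = [[1.0, 0.0], [0.0, 1.0]]              # the standard inner product
t = 0.7
A = [[math.cos(t), -math.sin(t)],         # a rotation: an element of O(V, b)
     [math.sin(t),  math.cos(t)]]

AtBA = matmul(transpose(A), matmul(B, A))
for i in range(2):
    for j in range(2):
        assert abs(AtBA[i][j] - B[i][j]) < 1e-12

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # = cos^2 t + sin^2 t = 1
assert abs(det - 1.0) < 1e-12
```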
1,204,745
<p>Let $(\Omega, A, \mathbb{P} )$ be a probability space. Let $f: \Omega \rightarrow [-\infty, \infty]$ be an $A$-measurable function. </p> <p>If $f$ is bounded on the positive side and unbounded on the negative side, is it possible that $\mathbb{E}[f]$ (the expectation with respect to the probability measure $\mathbb{P}$ ) is finite?</p> <p>And what if $f$ is unbounded on both sides?</p>
Zach466920
219,489
<p>For your first question, possibly. As for your second question: yes, it can be finite; most symmetric distributions are examples of this. The measure is bounded, but the space can be infinite.</p>
3,457,876
<p>How should I even begin to attempt to show that: <span class="math-container">$$\frac{\|\bf{x} - \tilde{x} \|}{\|\bf{x}\|} \leq \frac{cond(\bf{A})}{1 - \|\bf{A}^{-1} (\bf{A} - \bf{\tilde{A}}) \|} \left( \frac{\|\bf{b} - \bf{\tilde{b}} \|}{\|\bf{b}\|} + \frac{\|\bf{A} - \bf{\tilde{A}} \|}{\|\bf{A}\|} \right)$$</span> with <span class="math-container">$\bf{Ax = b}$</span> and <span class="math-container">$\bf{\tilde{A}\tilde{x} = \tilde{b}}$</span> for invertible real <span class="math-container">$n \times n$</span> matrices <span class="math-container">$\bf{A}$</span> and <span class="math-container">$\tilde{\bf{A}}$</span>; and the vectors are elements of <span class="math-container">$\mathbb{R}^n$</span> ? Any hint is much appreciated.</p> <p>Note that the norm <span class="math-container">$\| \:.\|$</span> is just any (consistent) norm.</p>
Algebraic Pavel
90,996
<p><strong>HINT:</strong> From <span class="math-container">$$\begin{split} x-\tilde{x} &amp;= A^{-1}b-\tilde{A}^{-1}\tilde{b} \\&amp;=A^{-1}b-\tilde{A}^{-1}b+\tilde{A}^{-1}b-\tilde{A}^{-1}\tilde{b} \\&amp;=\tilde{A}^{-1}(\tilde{A}-A)A^{-1}b+\tilde{A}^{-1}(b-\tilde{b}) \\&amp;=\tilde{A}^{-1}[(\tilde{A}-A)x+(b-\tilde{b})], \end{split} $$</span> and <span class="math-container">$\|b\|\leq\|A\|\|x\|$</span> we have that <span class="math-container">$$ \frac{\|x-\tilde{x}\|}{\|x\|} \leq \|\tilde{A}^{-1}\|\|A\|\left(\frac{\|\tilde{A}-A\|}{\|A\|}+\frac{\|b-\tilde{b}\|}{\|b\|}\right). $$</span></p> <p>Now you just need to make up a bound on <span class="math-container">$\|\tilde{A}^{-1}\|\|A\|$</span>.</p>
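<p>A numerical check of the final bound (Python, using the Frobenius norm as the consistent norm; not part of the original hint):</p>

```python
import math

def frob(M):     # Frobenius norm: consistent and submultiplicative
    return math.sqrt(sum(v*v for row in M for v in row))

def vnorm(v):
    return math.sqrt(sum(t*t for t in v))

def inv2(M):     # inverse of a 2x2 matrix
    (p, q), (r, s) = M
    det = p*s - q*r
    return [[s/det, -q/det], [-r/det, p/det]]

def mm(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mv(M, v):
    return [sum(M[i][k]*v[k] for k in range(2)) for i in range(2)]

def msub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

A  = [[2.0, 0.3], [0.1, 1.5]]
b  = [1.0, 2.0]
At = [[2.01, 0.29], [0.11, 1.52]]   # perturbed matrix A~
bt = [1.02, 1.99]                   # perturbed right-hand side b~

x  = mv(inv2(A), b)
xt = mv(inv2(At), bt)

E    = mm(inv2(A), msub(A, At))     # A^{-1}(A - A~)
cond = frob(A) * frob(inv2(A))

lhs = vnorm([x[i] - xt[i] for i in range(2)]) / vnorm(x)
rhs = cond / (1 - frob(E)) * (
    vnorm([b[i] - bt[i] for i in range(2)]) / vnorm(b)
    + frob(msub(A, At)) / frob(A))
assert lhs <= rhs
```

<p>The missing piece, $\|\tilde{A}^{-1}\|\le\|A^{-1}\|/(1-\|A^{-1}(A-\tilde{A})\|)$, follows from the identity $\tilde{A}^{-1}=A^{-1}+A^{-1}(A-\tilde{A})\tilde{A}^{-1}$, which needs only submultiplicativity.</p>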
3,457,876
<p>How should I even begin to attempt to show that: <span class="math-container">$$\frac{\|\bf{x} - \tilde{x} \|}{\|\bf{x}\|} \leq \frac{cond(\bf{A})}{1 - \|\bf{A}^{-1} (\bf{A} - \bf{\tilde{A}}) \|} \left( \frac{\|\bf{b} - \bf{\tilde{b}} \|}{\|\bf{b}\|} + \frac{\|\bf{A} - \bf{\tilde{A}} \|}{\|\bf{A}\|} \right)$$</span> with <span class="math-container">$\bf{Ax = b}$</span> and <span class="math-container">$\bf{\tilde{A}\tilde{x} = \tilde{b}}$</span> for invertible real <span class="math-container">$n \times n$</span> matrices <span class="math-container">$\bf{A}$</span> and <span class="math-container">$\tilde{\bf{A}}$</span>; and the vectors are elements of <span class="math-container">$\mathbb{R}^n$</span> ? Any hint is much appreciated.</p> <p>Note that the norm <span class="math-container">$\| \:.\|$</span> is just any (consistent) norm.</p>
Dat Minh Ha
690,489
<p>I might edit this answer into a proper one once I'm a bit more free, but for now, please accept these screenshots since I don't have enough time to convert my custom LaTeX commands into normal ones. <a href="https://i.stack.imgur.com/dBzi1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dBzi1.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/R26cz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R26cz.png" alt="enter image description here"></a></p>
2,613,410
<blockquote> <p>What is the value of <span class="math-container">$2x+3y$</span> if</p> <p><span class="math-container">$x+y=6$</span> &amp; <span class="math-container">$x^2+3xy+2y=60$</span> ?</p> </blockquote> <p>My trial: from the given conditions, substitute <span class="math-container">$y=6-x$</span> in <span class="math-container">$x^2+3xy+2y=60$</span>: <span class="math-container">$$x^2+3x(6-x)+2(6-x)=60$$</span> <span class="math-container">$$x^2-8x+24=0$$</span> <span class="math-container">$$x=\frac{8\pm\sqrt{8^2-4(1)(24)}}{2(1)}=4\pm2i\sqrt2$$</span> this gives us <span class="math-container">$y=2\mp2i\sqrt2$</span>, so we now have <span class="math-container">$x=4+2i\sqrt2, y=2-2i\sqrt2$</span> or <span class="math-container">$x=4-2i\sqrt2, y=2+2i\sqrt2$</span></p> <p>Substituting these values I got <span class="math-container">$2x+3y=14-2i\sqrt2$</span> or <span class="math-container">$$2x+3y=14+2i\sqrt2$$</span></p> <p>But my book suggests that <span class="math-container">$2x+3y$</span> should be a real value, which I couldn't get. Can somebody please help me solve this problem? Is there any mistake in the question?</p> <p>Thank you.</p>
Community
-1
<p>Assuming a typo ($y$ instead of $y^2$), we restore</p> <p>$$\begin{cases}x+y=6,\\x^2+3xy+2y^2=(x+y)(x+2y)=60,\end{cases}$$ then</p> <p>$$2x+3y=(x+2y)+(x+y)=\frac{60}6+6=16.$$</p>
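A quick check of the arithmetic (the values $x=2$, $y=4$ solve the restored system exactly):

```python
from fractions import Fraction

# With the y^2 reading, x + y = 6 and (x + y)(x + 2y) = 60 give
# x + 2y = 10, hence 2x + 3y = (x + y) + (x + 2y) = 16.
x, y = Fraction(2), Fraction(4)          # solves x + y = 6, x + 2y = 10
assert x + y == 6
assert x**2 + 3*x*y + 2*y**2 == 60
assert 2*x + 3*y == Fraction(60, 6) + 6 == 16
```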
4,403,081
<p><span class="math-container">$$\int \dfrac{dx}{x\sqrt{x^4-1}}$$</span></p> <p>I need to evaluate this integral. I worked it out and got <span class="math-container">$\dfrac12\tan^{-1}(\sqrt{x^4-1}) + C$</span>, however the answer given in my textbook is <span class="math-container">$\dfrac12\sec^{-1}(x^2) + C$</span>.</p> <p>How can I prove that both quantities are equal? Is there something wrong with my answer?</p> <p><strong>EDIT:</strong></p> <p>Here's my work: <span class="math-container">$$\int\dfrac{dx}{x\sqrt{x^4-1}}= \dfrac{1}{4}\int\dfrac{4x^3 dx}{x^4\sqrt{x^4-1}}$$</span></p> <p>Let <span class="math-container">$x^4 - 1 = t^2$</span> <span class="math-container">$$\dfrac{1}{2}\int\dfrac{dt}{1 + t^2}$$</span></p> <p><span class="math-container">$$\dfrac12 \tan^{-1}(\sqrt{x^4 -1 }) + C$$</span></p>
B. Goddard
362,009
<p>I tell my students that inverse trig functions are <em>angles.</em> So if you write</p> <p><span class="math-container">$$\tan^{-1}\sqrt{x^4-1} = \theta,$$</span></p> <p>then</p> <p><span class="math-container">$$\tan\theta = \sqrt{x^4-1}.$$</span></p> <p>A right triangle that tells this story has <span class="math-container">$\theta$</span> as one angle, <span class="math-container">$\sqrt{x^4-1}$</span> as the opposite side and <span class="math-container">$1$</span> as the adjacent side. Using Pythagorean theorem we can work out the length <span class="math-container">$c$</span> of the hypotenuse:</p> <p><span class="math-container">$$(\sqrt{x^4-1})^2+1^2 = c^2$$</span></p> <p>which shows that <span class="math-container">$c=x^2$</span>.</p> <p>So <span class="math-container">$\sec \theta = x^2/1$</span>, that is <span class="math-container">$\sec^{-1}(x^2) = \theta = \tan^{-1}( \sqrt{x^4-1}).$</span></p>
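The identity is easy to confirm numerically. Python's `math` module has no `asec`, so the sketch below uses the standard identity $\operatorname{arcsec}(u)=\arccos(1/u)$ for $u\ge 1$ (the sample points are arbitrary values with $x>1$):

```python
import math

# Check numerically that arctan(sqrt(x^4 - 1)) = arcsec(x^2) for a few x > 1.
for x in [1.1, 1.5, 2.0, 3.0]:
    lhs = math.atan(math.sqrt(x**4 - 1))
    rhs = math.acos(1 / x**2)    # arcsec(x^2) = arccos(1/x^2)
    assert abs(lhs - rhs) < 1e-12
```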
4,474,806
<p>I use the following method to calculate <span class="math-container">$b$</span>, which is <span class="math-container">$a$</span> <strong>increased</strong> by <span class="math-container">$x$</span> percent:</p> <p><span class="math-container">$\begin{align} a = 200 \end{align}$</span></p> <p><span class="math-container">$\begin{align} x = 5\% \text{ (represented as } \frac{5}{100} = 0.05 \text{)} \end{align}$</span></p> <p><span class="math-container">$\begin{align} b = a \cdot (1 + x) \ = 200 \cdot (1 + 0.05) \ = 200 \cdot 1.05 \ = 210 \end{align}$</span></p> <p>Now I want to calculate <span class="math-container">$c$</span>, which is also <span class="math-container">$a$</span> but <strong>decreased</strong> by <span class="math-container">$x$</span> percent.</p> <p>My instinct is to preserve the method, but to use division instead of multiplication (being the inverse operation):</p> <p><span class="math-container">$ \begin{align} c = \frac{a}{1 + x} \ = \frac{200}{1 + 0.05} \ = \frac{200}{1.05} \ = 190.476190476 \ \end{align} $</span></p> <p>The result looks a bit off? But also interesting as I can multiply it by the percent and I get back the initial value (<span class="math-container">$190.476190476 \cdot 1.05 = 200$</span>).</p> <p>I think the correct result should be 190 (without any decimal), using:</p> <p><span class="math-container">$ \begin{align} c = a \cdot (1 - x) \ = 200 \cdot (1 - 0.05) \ = 200 \cdot 0.95 \ = 190 \end{align} $</span></p> <p>What's the difference between them? What I'm actually calculating?</p>
GEdgar
442
<p>It is true that: increase by <span class="math-container">$x$</span> percent then decrease the result by <span class="math-container">$x$</span> percent does not get you back where you started.</p> <p>Let's do a case where it is clearer, say <span class="math-container">$x=100$</span>. Start with <span class="math-container">$20$</span>. Increase by <span class="math-container">$100$</span> percent. Well, <span class="math-container">$100$</span> percent of <span class="math-container">$20$</span> is exactly <span class="math-container">$20$</span>, so this means increase by <span class="math-container">$20$</span>. The result is <span class="math-container">$$ 20 + 20 = 40 . $$</span></p> <p>Now, starting at that <span class="math-container">$40$</span>, let's decrease that by <span class="math-container">$100$</span> percent. Well, <span class="math-container">$100$</span> percent of <span class="math-container">$40$</span> is <span class="math-container">$40$</span>, so we have to decrease by <span class="math-container">$40$</span>. The result is: <span class="math-container">$$ 40 - 40 = 0 . $$</span> So we certainly did not arrive back where we started.</p> <p>Look at this. We increased by <span class="math-container">$100$</span> percent of <span class="math-container">$20$</span>, then decreased by <span class="math-container">$100$</span> percent of <span class="math-container">$40$</span>. Of course <span class="math-container">$100$</span> percent of <span class="math-container">$20$</span> is not the same as <span class="math-container">$100$</span> percent of <span class="math-container">$40$</span>. So the amount increased is not the same as the amount decreased.</p>
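The asymmetry is easy to see numerically; a small sketch with the numbers from the question:

```python
# Increasing a by x percent and then decreasing the RESULT by x percent does
# not return to a, because the second "x percent" is taken of a larger base.
a, x = 200, 0.05

b = a * (1 + x)        # 210:      a increased by 5%
c = a * (1 - x)        # 190:      a decreased by 5%
d = a / (1 + x)        # ~190.476: the value that grows to a under a 5% increase

assert abs(b - 210) < 1e-9 and abs(c - 190) < 1e-9
assert abs(d * (1 + x) - a) < 1e-9      # d is the "pre-increase" value of a
assert abs(b * (1 - x) - 199.5) < 1e-9  # up 5% then down 5% loses 0.25% overall
```

So division by $(1+x)$ answers "what value, after a 5% increase, gives $a$?", while multiplication by $(1-x)$ answers "what is $a$ decreased by 5%?" — two different questions.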
1,296,420
<p>I was trying to find an example such that $G \cong G \times G$, but I am not getting anywhere. Obviously no finite group satisfies it. What is such group?</p>
Seirios
36,434
<p>I think it is an open problem whether or not there exists a finitely presented group $G$ satisfying $G \simeq G \times G$. However, several such finitely generated groups are known. Probably the first example was given by Jones in <a href="http://journals.cambridge.org/download.php?file=%2FJAZ%2FJAZ17_02%2FS144678870001675Xa.pdf&amp;code=7ef937f1b5042fe29ea5704c1f57a0b3" rel="nofollow">Direct products and the Hopf property</a>.</p>
2,658,195
<p>I have the following problem, which I cannot solve. I have a very large population of birds, e.g. 10,000. There are only 8 species of birds in this population, and each species has the same number of birds.</p> <p>I would like to calculate how many birds I have to catch to be 80% sure that I have caught at least one bird of each species.</p>
Community
-1
<p>First of all, your procedure is exactly like flipping a coin and getting heads or tails, so you should phrase your question that way. If you were only to throw the coin three times, everyone would have an equal chance of winning, but because you keep throwing continuously until someone gets their letters, certain letters have an advantage over others. For example, THH has an advantage over HHH (because if you see two heads, there is a chance a tail was thrown before them). In general, for any letters ABC, you can choose the letters C'AB to beat the other player (C' is the opposite of C). Here is a link to Numberphile's video explaining this in further detail: <a href="https://youtu.be/SDw2Pu0-H4g" rel="nofollow noreferrer">https://youtu.be/SDw2Pu0-H4g</a></p>
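A quick simulation of this effect (the trial count and seed are arbitrary): against HHH, the pattern THH wins whenever any tail appears before the first run of three heads, which happens with probability 7/8.

```python
import random

# Simulate Penney's game: which of two patterns appears first in a fair
# H/T sequence?  Returns "A" if pattern_a shows up first, else "B".
def first_to_appear(pattern_a, pattern_b, rng):
    seq = ""
    while True:
        seq += rng.choice("HT")
        if seq.endswith(pattern_a):
            return "A"
        if seq.endswith(pattern_b):
            return "B"

rng = random.Random(0)
trials = 20000
wins_thh = sum(first_to_appear("THH", "HHH", rng) == "A" for _ in range(trials))
rate = wins_thh / trials
# The exact winning probability of THH over HHH is 7/8 = 0.875.
assert 0.85 < rate < 0.90
```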
2,399,842
<p>I ran $\frac{d^n}{dx^n}[(x!)!]$ through <em><a href="https://www.wolframalpha.com/input/?i=nth%20derivative%20of%20(x!)!" rel="nofollow noreferrer">Wolfram|Alpha</a></em>, which returned</p> <blockquote> <p>$$\frac{\partial^n(x!)!}{\partial x^n} = \Gamma(1+x!)\,R(n,1+x!)$$ for</p> <ul> <li><p>$R(n,x)=\psi(x)\,R(-1+n,x)+R^{(0,1)}(-1+n,x)$</p></li> <li><p>$R(0,x)=1$</p></li> <li><p>$n\in\mathbb{Z}$</p></li> <li><p>$n&gt;0$</p></li> </ul> <p>where $\psi^{(n)}(x)$ is the $n$<sup>th</sup> derivative of the digamma function</p> </blockquote> <p>They define the <a href="http://mathworld.wolfram.com/PolygammaFunction.html" rel="nofollow noreferrer">polygamma function</a> as $$\psi^{(n)}(x)=\frac{d^{n+1}}{dx^{n+1}}\ln[\Gamma(x)]$$</p> <p>What on Earth is $R^{(0,1)}$, and how can I make sense of this $R(n,x)$ business?</p>
Simply Beautiful Art
272,831
<p>Define $f_n(x)$ as follows and show that $$f_n(x)=\Gamma(x)R(n,x)=\frac{d}{dx}\left[\Gamma(x)R(n-1,x)\right]=f'_{n-1}(x)\\f_0(x)=\Gamma(x)$$</p> <p>From this, you can see that we actually have</p> <p>$$f_n(x)=\Gamma^{(n)}(x)$$</p> <p>And particularly,</p> <blockquote> <p>$$R(n,x)=\frac{f_n(x)}{\Gamma(x)}=\frac{\Gamma^{(n)}(x)}{\Gamma(x)}$$</p> </blockquote>
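One way to gain confidence in $R(n,x)=\Gamma^{(n)}(x)/\Gamma(x)$ is a numerical check of the recursion for $n=2$ using finite differences (the test point $x_0=2.5$ and the step sizes are arbitrary choices):

```python
import math

# Check the recursion R(2, x) = psi(x) R(1, x) + dR(1, x)/dx at one point,
# with all derivatives of Gamma approximated by central finite differences.
def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

g = math.gamma
x0 = 2.5

psi = d1(g, x0) / g(x0)          # digamma = Gamma'/Gamma, i.e. R(1, x0)
R1 = lambda x: d1(g, x) / g(x)   # R(1, x) as a function of x

lhs = d2(g, x0) / g(x0)          # R(2, x0) = Gamma''(x0)/Gamma(x0)
rhs = psi * R1(x0) + d1(R1, x0)  # the recursion from the question

assert abs(lhs - rhs) < 1e-3
```

This matches the algebra above: since $\Gamma'=\Gamma\psi$, both sides reduce to $\psi^2+\psi'$.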
1,335,950
<p>I have the following sum ($n\in \Bbb N)$: $$ \frac {1}{1 \times 4} + \frac {1}{4 \times 7} + \frac {1}{7 \times 10} +...+ \frac {1}{(3n - 2)(3n + 1)} \tag{1} $$ It can be proved that the sum is equal to $$ \frac{n}{3n + 1} \tag{2}$$ My question is, how do I get the equality? I mean, if I hadn't knew the formula $(2)$, how would I derive it?</p>
Satish Ramanathan
99,745
<p>Hint:</p> <p>Use partial fractions on each term:</p> <p>$\frac{1}{(3n-2)(3n+1)}=\frac{1}{3}\left(\frac{1}{3n-2}-\frac{1}{3n+1}\right)$</p> <p>You will see massive telescoping cancellation, as below: $\frac{1}{3}\left[1-\frac{1}{4}\right]$</p> <p>$\frac{1}{3}\left[\frac{1}{4}-\frac{1}{7}\right]$</p> <p>..</p> <p>$\frac{1}{3}\left[\frac{1}{3n-2}-\frac{1}{3n+1}\right]$</p> <p>Summing all of these, you get $\frac{1}{3}\left[1-\frac{1}{3n+1}\right]$. Simplifying, you get the result.</p>
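The closed form is easy to verify exactly with rational arithmetic, e.g.:

```python
from fractions import Fraction

# Verify the telescoping identity sum_{k=1}^n 1/((3k-2)(3k+1)) = n/(3n+1)
# exactly, using exact rational arithmetic, for the first several n.
for n in range(1, 50):
    s = sum(Fraction(1, (3*k - 2) * (3*k + 1)) for k in range(1, n + 1))
    assert s == Fraction(n, 3*n + 1)
```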
1,335,950
<p>I have the following sum ($n\in \Bbb N)$: $$ \frac {1}{1 \times 4} + \frac {1}{4 \times 7} + \frac {1}{7 \times 10} +...+ \frac {1}{(3n - 2)(3n + 1)} \tag{1} $$ It can be proved that the sum is equal to $$ \frac{n}{3n + 1} \tag{2}$$ My question is, how do I get the equality? I mean, if I hadn't knew the formula $(2)$, how would I derive it?</p>
Arthur
250,056
<p>This is a nice task for induction, isn't it?</p> <p>For $n=1$ we clearly have</p> <p>$$\sum_{k=1}^1 \frac{1}{(3k-2)(3k+1)} = \frac{1}{4} = \frac{1}{3\cdot 1+1}$$</p> <p>The induction step is not so difficult either; it took two lines on my paper.</p>
1,335,950
<p>I have the following sum ($n\in \Bbb N)$: $$ \frac {1}{1 \times 4} + \frac {1}{4 \times 7} + \frac {1}{7 \times 10} +...+ \frac {1}{(3n - 2)(3n + 1)} \tag{1} $$ It can be proved that the sum is equal to $$ \frac{n}{3n + 1} \tag{2}$$ My question is, how do I get the equality? I mean, if I hadn't knew the formula $(2)$, how would I derive it?</p>
Vikram
11,309
<p>To find the sum of $n$ terms of a series, each term of which is the reciprocal of the product of $r$ factors in arithmetical progression, the first factors of the several terms being in the same arithmetical progression, use the following:</p> <p>Write down the $n^{th}$ term, strike off a factor from the beginning, divide by the number of factors so diminished and by the common difference, change the sign and add a constant.</p> <p>In the example given, the two factors in each of $(1,4),(4,7),(7,10)$ differ by $\color{red}3$, and the first factors $1, 4, 7$ are in A.P. with the same common difference. </p> <p>$$\therefore Sum=\frac{-1}{\color{red}3\times (3n+1)}+c$$</p> <p>When $n=1$, Sum $=\frac{1}{4} \Rightarrow -\frac{1}{12}+c=\frac{1}{4} \Rightarrow c=\frac{1}{3}$</p> <p>$$\therefore Sum=\frac{-1}{3\times (3n+1)}+\frac{1}{3}=\frac{n}{3n+1}$$</p>
1,612,220
<p>This is an exercise page 7 from Sutherland's book Introduction to Metric and Topological Spaces.</p> <blockquote> <p>Suppose that <span class="math-container">$V,X,Y$</span> are sets with <span class="math-container">$V\subseteq X\subseteq Y$</span> and suppose that <span class="math-container">$U$</span> is a subset of <span class="math-container">$Y$</span> such that <span class="math-container">$X\setminus V=X\cap U$</span>.</p> <p>Prove that <span class="math-container">$V=X\cap(Y\setminus U)$</span>.</p> </blockquote> <p>My attempt:</p> <p>Let <span class="math-container">$x\in X\cap(Y\setminus U)$</span>. Then <span class="math-container">$x\in X$</span> and <span class="math-container">$x\in Y\setminus U$</span>. So, <span class="math-container">$x\in X$</span> and <span class="math-container">$x\notin U$</span>.</p> <p>Here the solution given by Sutherland's book argues differently. So I am wondering if I can say: If an element <span class="math-container">$x$</span> is in the set <span class="math-container">$X$</span> then we can write <span class="math-container">$x\in V$</span> and <span class="math-container">$x\in X\cap V$</span>.</p> <p>And continuing, we have <span class="math-container">$x\in V$</span> and <span class="math-container">$x\in X\cap V$</span> and <span class="math-container">$x\notin U$</span>.</p> <p>The last two relations can be eliminated. And hence, <span class="math-container">$x\in V$</span>.</p> <p>The second part of the proof is to prove conversely that <span class="math-container">$V\subseteq X\cap(Y\setminus U)$</span>.</p> <p>I am wondering if the first part of my proof is valid, especially the second sentence.</p>
gebruiker
145,141
<p>You start to reason in circles beyond the point "So I am wondering if I can say:..." </p> <p>You need to keep in mind what you are trying to do. Namely you want to show that $x\in X\cap (Y\backslash U)\implies x\in V$. So once you've showed that $x\in V$ you can just stop. If I were your teacher I would ask you to elaborate on why $x\in V$ if $x\in X$ and $x\not \in U$. (Can you do that?) It is true, however you just say it is. I think any teacher would want you to explain why.</p>
3,755,355
<p>I wanted to prove that every group or order <span class="math-container">$4$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_{4}$</span> or to the Klein group. I also wanted to prove that every group of order <span class="math-container">$6$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_{6}$</span> or <span class="math-container">$S_{3}$</span>.</p> <ol> <li><p>For the first one I tried to prove that <span class="math-container">$H$</span> (a random group of order 4) is cyclic or the Klein group, because if <span class="math-container">$H$</span> is cyclic I can prove that a cyclic group of order <span class="math-container">$n$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_{n}$</span>. Because <span class="math-container">$H$</span> has order <span class="math-container">$4$</span> it's only possible for elements in <span class="math-container">$H$</span> to have order <span class="math-container">$1$</span>, <span class="math-container">$2$</span>, <span class="math-container">$4$</span> (Lagrange). Say that <span class="math-container">$H$</span> is not cyclic. Then all the elements need to have order <span class="math-container">$1$</span> or <span class="math-container">$2$</span>. Not all the elements can have order <span class="math-container">$1$</span> so there must be one element of order <span class="math-container">$2$</span>. Say that <span class="math-container">$b$</span> is an element with order <span class="math-container">$2$</span>. Then take <span class="math-container">$c$</span> an element not the unit element or <span class="math-container">$b$</span>. Then <span class="math-container">$H=\{e, b, c, bc \}$</span>, so <span class="math-container">$c$</span> must have order <span class="math-container">$2$</span> because otherwise <span class="math-container">$H$</span> would have an order bigger than <span class="math-container">$4$</span>. 
This is the Klein group.</p> </li> <li><p>I wanted to do the second one analogously but I can't make a proper proof out of it.</p> </li> </ol> <p>Can someone help and correct me? (I'm so sorry for my English mistakes but I'm really trying.)</p>
Nicky Hekster
9,605
<p>As @rain1 pointed out, we have a group <span class="math-container">$G=\{1,a,b,ab\}$</span>, where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are different, commute and are not equal to <span class="math-container">$1$</span>. Let us call <span class="math-container">$ab=ba=c$</span>. Observe that <span class="math-container">$a \neq c$</span> and <span class="math-container">$b \neq c$</span>. Now look at <span class="math-container">$a^2$</span>. Then <span class="math-container">$a^2 \notin \{a,c\}$</span>, so either <span class="math-container">$a^2=1$</span> or <span class="math-container">$a^2=b$</span>. Symmetrically, either <span class="math-container">$b^2=1$</span> or <span class="math-container">$b^2=a$</span>. So there are <span class="math-container">$4$</span> cases to consider, but by symmetry in <span class="math-container">$a$</span> and <span class="math-container">$b$</span> this boils down to only <span class="math-container">$2$</span>. Firstly, <span class="math-container">$a^2=1$</span> and <span class="math-container">$b^2=1$</span>, in this case <span class="math-container">$G \cong V_4$</span>. And secondly, if <span class="math-container">$a^2=1$</span> and <span class="math-container">$b^2=a$</span>, then <span class="math-container">$b^4=1$</span> and <span class="math-container">$G \cong C_4$</span>. So no need of the structure theorem of abelian groups.<p> For groups of order <span class="math-container">$6$</span> you can proceed in a similar, but slightly more complicated way. Just applying elementary means. No Lagrange, no Cauchy.</p>
1,047,544
<p>I'm doing some research and I'm trying to compute a closed form for $ \mathbb{E}[ X \mid X &gt; Y] $ where $X$, $Y$ are independent normal (but not identical) random variables. Is this known?</p>
heropup
118,193
<p>Explicitly, we have for $X \sim \operatorname{Normal}(\mu_x, \sigma_x^2)$ and $Y \sim \operatorname{Normal}(\mu_y, \sigma_y^2)$, $$\operatorname{E}[X \mid X &gt; Y] = \int_{y=-\infty}^\infty \int_{x=y}^\infty x f_{X,Y}(x,y) \, dx \, dy$$ where $$f_{X,Y}(x,y) = \frac{1}{2\pi \sigma_x \sigma_y} \exp \biggl(-\frac{(x-\mu_x)^2}{2\sigma_x^2} - \frac{(y-\mu_y)^2}{2\sigma_y^2} \biggr)$$ is a bivariate normal density with zero correlation (since $X$, $Y$ are independent). This integral does not, for general parameters, have an elementary closed form.</p>
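For what it's worth, the conditional mean can be written in terms of the standard normal pdf $\varphi$ and cdf $\Phi$ (non-elementary functions, consistent with the answer): with $W = X - Y$, one has $\operatorname{E}[X \mid X > Y] = \mu_x + \frac{\sigma_x^2}{\sigma_W}\,\frac{\varphi(\alpha)}{\Phi(\alpha)}$ where $\alpha = (\mu_x-\mu_y)/\sigma_W$, since $\operatorname{Cov}(X, W)=\sigma_x^2$ for independent $X,Y$. A Monte Carlo sketch checking this against simulation (the parameter values, seed, and sample size are arbitrary):

```python
import math
import random

# Parameters of X ~ N(mx, sx^2) and Y ~ N(my, sy^2), chosen arbitrarily.
mx, sx, my, sy = 1.0, 2.0, 0.0, 1.0
sw = math.hypot(sx, sy)               # sd of W = X - Y
alpha = (mx - my) / sw

phi = math.exp(-alpha**2 / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
Phi = 0.5 * (1 + math.erf(alpha / math.sqrt(2)))         # standard normal cdf
formula = mx + (sx**2 / sw) * phi / Phi

# Monte Carlo estimate of E[X | X > Y].
rng = random.Random(1)
samples = [(rng.gauss(mx, sx), rng.gauss(my, sy)) for _ in range(400000)]
cond = [x for x, y in samples if x > y]
mc = sum(cond) / len(cond)

assert abs(mc - formula) < 0.03
```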
476,147
<p>I am working on a problem and I need help getting started. Any pointers would be greatly appreciated.</p> <p>My problem: Given a $50,000 purse, 20/20 hindsight, and a particular stock, what are the best buying and selling points if the only requirement is to maximize net profit? The stock is a daily chart going back 12 months, and there can be as many or as few buys and sells as desired.</p> <p>Added Clarification:</p> <p>Commission and slippage: 0.5%. Minimum holding period: 2 days. Shorts not allowed. No margin allowed. </p> <p>How can I approach this problem? What algorithms can I use to solve this problem? At this point, I am looking for pointers to then google them or youtube them. I use Matlab.</p> <p>Thanks a lot for your help. </p>
Ross Millikan
1,827
<p>You want to own the stock any day it goes up and not on any day it goes down, assuming you are not worrying about commissions. So take the difference between today and tomorrow and use the sign of it to decide whether to buy or sell.</p>
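The greedy rule above ("own the stock exactly on the days it goes up") can be sketched as follows; note this ignores the commission and minimum-holding-period constraints from the question, and the price series here is made up:

```python
# With hindsight and no commissions, the maximum profit is the sum of all
# positive day-to-day price differences: buy before every rise, sell before
# every fall.
def max_profit(prices):
    return sum(max(prices[i + 1] - prices[i], 0) for i in range(len(prices) - 1))

prices = [100, 103, 101, 106, 104, 104, 110]
# Rises: +3 (100->103), +5 (101->106), +6 (104->110)  => 14 total
assert max_profit(prices) == 14
```

With commissions, a short rise may not be worth trading, so the greedy rule needs adjusting (e.g. via dynamic programming over hold/no-hold states); the sketch above is only the frictionless case.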
1,224,202
<p>Does the following equation makes any sense at all?</p> <p>$$ \frac{1}{|X|\cdot|Y|}\sum\limits_{x \in X}\sum\limits_{y \in Y}\begin{cases} 1 &amp; \mathrm{if~} x &gt; y\\ 0.5 &amp; \mathrm{if~} x = y\\ 0 &amp; \mathrm{if~} x &lt; y \end{cases} $$</p> <p>For every comparison of $x$ and $y$, I want to add 1, 0.5 or 0 according to the case statements, and then multiply by the left part of the equation. Is it correct the way it is written? Is there any more beautiful way of designing that equation?</p>
Kitegi
120,267
<p>It looks correct to me. I don't know what you mean by something more beautiful, but you could use the sign function. $$\frac{1}{|X|\cdot |Y|} \sum_{\substack{x\in X \\ y \in Y}} \frac{1+\operatorname{sgn}(x-y)}{2} $$</p>
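For what it's worth, this quantity is the Mann–Whitney/AUC-type statistic: the fraction of $(x, y)$ pairs where $x$ beats $y$, with ties counted as half. A direct implementation of the sign-function form:

```python
def sign(v):
    # sgn: -1, 0, or 1
    return (v > 0) - (v < 0)

def score(X, Y):
    # Average of (1 + sgn(x - y))/2 over all pairs: 1 if x > y,
    # 0.5 if x == y, 0 if x < y.
    total = sum((1 + sign(x - y)) / 2 for x in X for y in Y)
    return total / (len(X) * len(Y))

X = [3, 5, 7]
Y = [3, 4]
# Pairs: (3,3)=.5 (3,4)=0 (5,3)=1 (5,4)=1 (7,3)=1 (7,4)=1 -> 4.5/6 = 0.75
assert score(X, Y) == 0.75
```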
11,353
<p>Thinking about the counterintuitive <em>Monty Hall Problem</em> (stick or switch?), revisited in <a href="https://matheducators.stackexchange.com/a/11346/511">this ME question</a>, I thought I would issue a challenge:</p> <blockquote> <p>Give in one (perhaps long) sentence a convincing explanation of why <em>switching</em> is twice as likely to lead to winning as <em>sticking</em>.</p> </blockquote> <p>Assume the game assumptions are pre-stated and clear.</p> <p>The probabilities are not even close, so there should be a convincing explanation after all <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="nofollow noreferrer">the discussion of this topic</a>, even though "1,000 Ph.D."s got it wrong (in 1990 when it first went viral).</p>
JTP - Apologise to Monica
64
<p>My preferred explanation - </p> <p>The key thing to understand is that MH knows the correct door. Say there were a thousand doors. Your chance of choosing the correct door is 1/1000. Now, MH has 999 doors, and after opening 998 of them, there's one left. In effect, he has concentrated all of that probability, the 999/1000, onto that one remaining door. So, by switching, your chance of success is 999/1000, because your chance of being right (pre-switch) was always 1/1000, and of being wrong, 999/1000. </p>
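The 2/3-vs-1/3 odds in the three-door version are easy to confirm by simulation; a minimal sketch (trial count and seed are arbitrary):

```python
import random

# Simulate one Monty Hall game; returns True if the player wins the car.
def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a goat door that is neither the pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(42)
n = 30000
switch_rate = sum(play(True, rng) for _ in range(n)) / n
stick_rate = sum(play(False, rng) for _ in range(n)) / n
assert 0.64 < switch_rate < 0.69      # near 2/3
assert 0.31 < stick_rate < 0.36       # near 1/3
```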
562,802
<p>I have been recently investigating the sequence 1,11,111,... I found, contrary to my initial preconception, that the elements of the sequence can have a very interesting multiplicative structure. There are for example elements of the sequence that are divisible by primes like 7 or 2003.</p> <p>What I am interested in is for what numbers, other than 2 and 5 can we say that they divide no element of the sequence?</p>
N. S.
9,176
<p>As you can see in the answers to this <a href="https://math.stackexchange.com/questions/83932/proof-that-a-natural-number-multiplied-by-some-integer-results-in-a-number-with/83968#83968">question</a>, a number has a multiple of the form $111...1$ if the number is not divisible by $2$ and $5$ (i.e. relatively prime to 10). </p> <p>Conversely, if a number has a multiple of the form $111.11$ its multiple is not divisible by $2$ or $5$ since the last digit is $1$. Thus the number is relatively prime to $2$ and $5$.</p> <p>Conclusion: A number $n$ has a multiple of the form $11111....1$ if and only if the number is relatively prime to $10$.</p> <p><strong>P.S.</strong> Another interesting property. For any prime $p \neq 2,5$ it follows from Fermat Little Theorem that </p> <p>$$p|10^{p-1}-1 \,.$$</p> <p>From here it follows immediately that for any prime $p \neq 2,3,5$, $p|111...1$, where there are exactly $p-1$ ones...</p>
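The remainder argument behind this can be turned into a short search: tracking $1, 11, 111, \dots$ modulo $n$ finds the smallest repunit multiple when $\gcd(n,10)=1$, and finds none otherwise. A sketch (by a pigeonhole argument, at most $n$ steps are ever needed):

```python
# Find, for n coprime to 10, the length of the smallest repunit 11...1
# divisible by n, by tracking remainders of 1, 11, 111, ... mod n.
def repunit_length(n):
    r = 0
    for k in range(1, n + 1):
        r = (10 * r + 1) % n     # append a digit 1 to the repunit, mod n
        if r == 0:
            return k
    return None                  # happens exactly when gcd(n, 10) > 1

assert repunit_length(3) == 3           # 3 | 111
assert repunit_length(7) == 6           # 7 | 111111
assert repunit_length(2) is None and repunit_length(5) is None
assert repunit_length(2003) is not None
```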
562,802
<p>I have been recently investigating the sequence 1,11,111,... I found, contrary to my initial preconception, that the elements of the sequence can have a very interesting multiplicative structure. There are for example elements of the sequence that are divisible by primes like 7 or 2003.</p> <p>What I am interested in is for what numbers, other than 2 and 5 can we say that they divide no element of the sequence?</p>
Ahaan S. Rungta
85,039
<p>Indeed, there are such nice properties. To start, consider the following exercise from <em>The Art and Craft of Problem Solving</em> by Paul Zeitz. </p> <p><strong>Example 1.2</strong>: There is an element in the sequence $ 7, 77, 777, \cdots $ that is divisible by $2003$. </p> <p><strong>Proof</strong>: We prove that an even stronger statement is true; in fact, one of the first $2003$ elements of the sequence is divisible by $2003$. Let us assume that the contrary is true. Then take the first $2003$ elements of the sequence and divide each of them by $2003$. As none of them is divisible by $2003$, each will have a remainder that is at least $1$ and at most $2002$. As there are $2003$ remainders (one for each of the first $2003$ elements of the sequence), and only $2002$ possible values for these remainders, it follows by the Pigeonhole Principle that there are two elements out of the first $2003$ that have the same remainder. Let us say that the $i$th and $j$th elements of the sequence, $a_i$ and $a_j$, have this property, and let $ i &lt; j $. Consider the following difference: $$ \underbrace {777 \cdots 7}_{j \, \text{digits}} - \underbrace {77 \cdots 7}_{i \, \text{digits}} = \underbrace {7 \cdots 7}_{j-i \, \text{sevens}}\underbrace{0 \cdots 0}_{i \, \text{zeroes}}. $$As $a_i$ and $a_j$ have the same remainder when divided by $2003$, there exist non-negative integers $k_i$, $k_j$, and $r$ so that $ r \le 2002 $, and $ a_i = 2003k_i + r $ and $ a_j = 2003k_j + r $. This shows that $ a_j - a_i = 2003 \cdot (k_j - k_i) $, so in particular, $a_j-a_i$ is divisible by $2003$. </p> <p>This is nice, but we need to show that there is an element in our sequence that is divisible by $2003$, and $a_j-a_i$ is not an element in our sequence. However, the centered text above is very useful. </p> <p>Indeed, $a_j-a_i$ consists of $j-i$ digits equal to $7$, followed by $i$ digits equal to $0$. 
In other words, $$ a_j - a_i = a_{j-i} \cdot 10^i, $$ and the proof follows as $ 10^i $ is relatively prime to $2003$, so $a_{j-i}$ must be divisible by $2003$. </p> <p>$ \blacksquare $ </p> <p>Can you generalize? Also, see <a href="https://en.wikipedia.org/wiki/Repunit" rel="nofollow"><strong>here</strong></a>. </p>
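The pigeonhole bound can be confirmed computationally: tracking the remainders of $7, 77, 777, \dots$ modulo $2003$ locates a term divisible by $2003$ within the first $2003$ elements.

```python
# Track remainders of 7, 77, 777, ... mod 2003 until one is divisible.
r = 0
k = None
for i in range(1, 2004):
    r = (10 * r + 7) % 2003   # append a digit 7, mod 2003
    if r == 0:
        k = i
        break

assert k is not None          # guaranteed by the pigeonhole argument
# Rebuild the number and confirm divisibility exactly.
assert int("7" * k) % 2003 == 0
```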
1,245,651
<p>In algebra, I learned that if <span class="math-container">$\lambda$</span> is an eigenvalue of a linear operator <span class="math-container">$T$</span>, I can have <span class="math-container">\begin{equation} Tx = \lambda x \tag{1} \end{equation}</span> for some <span class="math-container">$x\neq 0$</span>, which is equivalent to <span class="math-container">$\lambda I-T$</span> not being invertible.</p> <p>In functional analysis, it is said that if <span class="math-container">$\lambda$</span> is an element of a spectrum of the linear operator <span class="math-container">$T$</span>, then <span class="math-container">$\lambda I - T$</span> is not invertible. However, my Professor never mentioned <span class="math-container">$(1)$</span>.</p> <p>Is the definition/concept in functional analysis the same as <span class="math-container">$(1)$</span> in linear algebra? Can I use <span class="math-container">$(1)$</span> in functional analysis too? Does it depend on which spaces we are in?</p> <p>For example, suppose <span class="math-container">$\lambda$</span> is in the spectrum of <span class="math-container">$T$</span>, where <span class="math-container">$T$</span> is a linear operator on <span class="math-container">$E$</span>, a Banach space. I want to show <span class="math-container">$\lambda^n$</span> is in the spectrum of <span class="math-container">$T^n$</span>. Would this problem is equivalent to showing if <span class="math-container">$\lambda$</span> is an eigenvalue of a linear operator <span class="math-container">$T$</span>, then <span class="math-container">$\lambda^n$</span> is an eigenvalue of <span class="math-container">$T^n$</span>?</p> <p>Thank you.</p>
cfh
164,698
<p>Spectral theory in infinite-dimensional spaces is quite a bit more complicated than in the finite-dimensional case. In particular, we have to distinguish between the spectrum $\sigma(A)$ of an operator and its eigenvalues. Let $A$ be a linear operator on a Banach space $X$ over the scalar field $C$. We have $$ \sigma(A) = \{ \lambda \in C: (\lambda I - A) \text{ does not have a bounded inverse} \}. $$ An eigenvalue $\lambda$ of $A$ is a value such that there exists a nonzero eigenvector $x \in X$ such that $$ A x = \lambda x, $$ or equivalently, $\operatorname{ker}(\lambda I - A) \neq \{0\}$. We then call $\dim \operatorname{ker}(\lambda I - A)$ the geometric multiplicity of the eigenvalue $\lambda$.</p> <p>An eigenvalue is always in the spectrum, as you can see from the definition, but not every element of the spectrum is an eigenvalue in general.</p> <p>In increasing order of "complicatedness", we could say:</p> <ul> <li><strong>Matrices</strong> (linear bounded operators on finite-dimensional vector spaces): the spectrum is finite, and each of its elements is an eigenvalue.</li> <li><strong>Compact self-adjoint operators on a Hilbert space</strong>: almost as nice as matrices. The spectrum is a compact set and countable, and it is contained in the reals. Every nonzero element of the spectrum is an eigenvalue with finite multiplicity. There is a spectral decomposition of the operator much as one would have for a matrix.</li> <li><strong>Bounded operators</strong>: the spectrum is still compact, but may be uncountable. In fact, for any nonempty compact set in $C$, you can find an operator which has this set as its spectrum. Yet it's still quite possible that no element of the spectrum is an eigenvalue; see the example of T.A.E. given in another answer.</li> <li><strong>Unbounded operators</strong>: the spectrum is in general unbounded.</li> </ul> <p>This list could of course be refined with more specific conditions. 
I'm still studying the theory myself and will go back and add details as I learn about them. (Suggestions are welcome.) If you want to learn more, I found some relatively digestible <a href="http://www.math.ethz.ch/~kowalski/spectral-theory.pdf" rel="noreferrer">lecture notes</a> by E. Kowalski, ETH Zürich.</p>
3,583,330
<p>I've approached the problem the following way : </p> <p>Out of the 7 dice, I select any 6 which will have distinct numbers : 7C6.</p> <p>In the 6 dice, there can be 6! ways in which distinct numbers appear.</p> <p>And lastly, the last dice will have 6 possible ways in which it can show a number.</p> <p>So the required answer should be : 7C6 * 6! * 6/(6^7) which on simplifying becomes : 70/(6^3 * 3).</p> <p>However, the answer given is 35/(6^3 * 3).</p> <p>Where exactly am I going wrong?</p>
Vincent
101,420
<p>You probably noticed that your answer differs from the correct answer by a factor 2, so apparently you count everything twice.</p> <p>Suppose your dice are labeled A, B, C, D, E, F, G and you throw:</p> <p>A:1</p> <p>B:2</p> <p>C: 3</p> <p>D:4</p> <p>E: 5</p> <p>F: 6</p> <p>G: 1</p> <p>Then you count this throw twice: one time with ABCDEF as the 'special' dice showing 6 different figures and G as the redundant die, and once with BCDEFG as the special dice showing 6 different figures and A as the redundant die.</p>
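The corrected count can be confirmed by brute force over all $6^7$ outcomes:

```python
from fractions import Fraction
from itertools import product

# Count outcomes with exactly six distinct faces among seven dice, over all
# 6^7 = 279936 equally likely rolls, and compare with 35/(6^3 * 3).
count = sum(1 for roll in product(range(1, 7), repeat=7)
            if len(set(roll)) == 6)
assert Fraction(count, 6**7) == Fraction(35, 6**3 * 3)
```

The correct direct count is: choose the repeated face ($6$ ways), choose which two of the seven dice show it ($\binom{7}{2}=21$ ways, an unordered pair, which is where the factor $2$ goes), and arrange the remaining five faces on the remaining five dice ($5!$ ways).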
4,279,076
<p>I have seen on Wikipedia that irrational numbers have infinite continued fractions, but I also found <span class="math-container">$$1=\frac{2}{3-\frac{2}{3-\ddots}}$$</span> So my question is: does that mean <span class="math-container">$1$</span> is irrational, because it can be written as an infinite continued fraction?</p>
jjagmath
571,433
<p>The theorem about irrationals and infinite continued fractions is for <strong>simple</strong> continued fractions. See <a href="https://en.m.wikipedia.org/wiki/Continued_fraction" rel="nofollow noreferrer">here</a></p>
33,387
<p>I was told the following "Theorem": Let $y^{2} =x^{3} + Ax^{2} +Bx$ be a nonsingular cubic curve with $A,B \in \mathbb{Z}$. Then the rank $r$ of this curve satisfies</p> <p>$r \leq \nu (A^{2} -4B) +\nu(B) -1$</p> <p>where $\nu(n)$ is the number of distinct positive prime divisors of $n$.</p> <p>I can not find a name for this theorem or a reference, and I am wondering if it is a well known result, or if it is even true. Has anyone seen this result or have a suggestion on where I can find a reference. Thank you.</p>
GeoffDS
8,671
<p>It's in Alvaro Lozano-Robledo's book. In fact, you can find it online.</p> <p><a href="http://www.math.uic.edu/~wgarci4/pcmi/PCMI_Lectures.pdf" rel="nofollow">http://www.math.uic.edu/~wgarci4/pcmi/PCMI_Lectures.pdf</a></p> <p>It's Theorem 2.6.4 on page 42.</p>
33,387
<p>I was told the following "Theorem": Let $y^{2} =x^{3} + Ax^{2} +Bx$ be a nonsingular cubic curve with $A,B \in \mathbb{Z}$. Then the rank $r$ of this curve satisfies</p> <p>$r \leq \nu (A^{2} -4B) +\nu(B) -1$</p> <p>where $\nu(n)$ is the number of distinct positive prime divisors of $n$.</p> <p>I can not find a name for this theorem or a reference, and I am wondering if it is a well known result, or if it is even true. Has anyone seen this result or have a suggestion on where I can find a reference. Thank you.</p>
Álvaro Lozano-Robledo
14,699
<p>Two co-authors and I included a proof of this fact in <a href="http://alozano.clas.uconn.edu/wp-content/uploads/sites/490/2014/01/ALP-2-23-07.pdf" rel="nofollow">our paper</a>, in order to make our article self-contained (but we do not claim to be the first ones to point this out). As Pete Clark explains, it follows easily from the method of descent via 2-isogeny.</p>
1,533,646
<p>Dying someone appointed in the will the following: If his pregnant wife giving birth to a son , then she will inherit 1/3 of the estate and his son 2/3 . If giving birth to daughter , then she would inherit 2/3 of the property and the daughter 1/3 . The woman gave birth to twins after the death of her husband , a boy and a girl .How will be distributed the father's estate?</p> <p>Any ideas or hints?</p>
hmakholm left over Monica
14,366
<p>Classical and intuitionistic propositional logic do not prove the same formulas, even in the purely implicational fragment.</p> <p>Most famously, <em>Peirce's Law</em> $((P\to Q)\to P)\to P$ is a classical tautology, but is not intuitionistically valid. (That is, classical logic proves it, but intuitionistic logic doesn't).</p> <hr> <p>The two logics <em>are</em> equivalent for the $\{\land,\lor\}$ fragment, though. In terms of which formulas are theorems of the pure calculus, this is not very interesting (because <em>no</em> formula in the $\{\land,\lor\}$ fragment is a theorem), but it also holds if you consider non-empty theories: Classical and intuitionistic <em>entailment</em> coincide for this fragment.</p> <p>See <a href="https://math.stackexchange.com/questions/561020/minimal-difference-between-classical-and-intuitionistic-sequent-calculus">this question</a> which shows that the only change to the classical sequent calculus LK that is necessary to get intuitionistic logic instead is to the ${\to}R$ rule. However a cut-free proof in the sequent calculus never uses rules for connectives that don't appear in the conclusion, so the valid (cut-free) proofs in the classical LK for conclusions in the $\{\land,\lor\}$ fragment are the same as the valid proofs in the intuitionistic variant.</p>
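The classical validity of Peirce's Law is easy to confirm by brute force over the four truth-value assignments; a small sketch of mine (of course, a truth table can only exhibit the classical side — the intuitionistic failure requires a different semantics, e.g. Kripke models):

```python
def implies(p, q):
    # Classical material implication on booleans.
    return (not p) or q

# Peirce's Law ((P -> Q) -> P) -> P holds under every classical valuation.
peirce_valid = all(
    implies(implies(implies(p, q), p), p)
    for p in (False, True)
    for q in (False, True)
)
```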
349,177
<p>I have the following functional equation: <span class="math-container">$$f(a+b)=f^{(n-1)}(a)f(b)+f^{(n-2)}(a)f^{(1)}(b)+\dots+f(a)f^{(n-1)}(b)=\sum_{k=0}^{n-1}f^{(n-1-k)}(a)f^{(k)}(b)$$</span> where <span class="math-container">$a,b$</span> can be any complex numbers, <span class="math-container">$f$</span> is an entire function and <span class="math-container">$n\in\mathbb N$</span>. <span class="math-container">$f^{(k)}(x)$</span> denotes the <span class="math-container">$k$</span>-th derivative of <span class="math-container">$f$</span> at <span class="math-container">$x$</span>.</p> <p>What is the usual way to solve these problems?</p>
Community
-1
<p>If $n=1$, we have $f(a+b) = f(a)f(b)$. If $f$ is assume to be continuous, then we have $f(x) = e^{kx}$.</p> <p>If $n=2$, we have $f(a+b) = f'(a)f(b) + f(a) f'(b)$. Taking a cue from the above, if we let $f(x) = ce^{kx}$, we get $$ce^{k(a+b)} = 2c^2ke^{k(a+b)}$$ This gives us $2ck = 1 \implies c = \dfrac1{2k}$.</p> <p>If $n=3$, we have $f(a+b) = f^2(a)f(b) + f'(a) f'(b) + f(a) f^2(b)$. As above, if we let $f(x) = ce^{kx}$, we get $$ce^{k(a+b)} = (c^2k^2 + c^2k^2+c^2k^2)e^{k(a+b)}$$ This gives us $3ck^2 = 1 \implies c = \dfrac1{3k^2}$.</p> <p>Hence, in general, $$f(x) = \dfrac{e^{kx}}{nk^{n-1}}$$ satisfies the recurrence. This is one possible solution. You could try $f(x) = P_{n-1}(x)e^{kx}$ where $P_{n-1}(x)$ is a polynomial of degree $n-1$ and try to obtain some constraint on its coefficients.</p>
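The general formula is easy to verify directly: since $f^{(j)}(x) = k^j f(x)$ for $f(x) = e^{kx}/(nk^{n-1})$, each of the $n$ terms on the right equals $k^{n-1}e^{k(a+b)}/(n^2k^{2n-2})$, and the sum collapses to $f(a+b)$. A quick numeric check (mine, not part of the original answer):

```python
import math

def f(x, n, k, j=0):
    # j-th derivative of f(x) = exp(k*x) / (n * k**(n-1)).
    return (k ** j) * math.exp(k * x) / (n * k ** (n - 1))

def residual(n, k, a, b):
    # |f(a+b) - sum_j f^{(n-1-j)}(a) f^{(j)}(b)|; should be ~0.
    lhs = f(a + b, n, k)
    rhs = sum(f(a, n, k, n - 1 - j) * f(b, n, k, j) for j in range(n))
    return abs(lhs - rhs)

errors = [residual(n, 0.7, 0.3, -1.2) for n in range(1, 6)]
```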
349,177
<p>I have the following functional equation: <span class="math-container">$$f(a+b)=f^{(n-1)}(a)f(b)+f^{(n-2)}(a)f^{(1)}(b)+\dots+f(a)f^{(n-1)}(b)=\sum_{k=0}^{n-1}f^{(n-1-k)}(a)f^{(k)}(b)$$</span> where <span class="math-container">$a,b$</span> can be any complex numbers, <span class="math-container">$f$</span> is an entire function and <span class="math-container">$n\in\mathbb N$</span>. <span class="math-container">$f^{(k)}(x)$</span> denotes the <span class="math-container">$k$</span>-th derivative of <span class="math-container">$f$</span> at <span class="math-container">$x$</span>.</p> <p>What is the usual way to solve these problems?</p>
achille hui
59,379
<p>Let $\partial_a$ and $\partial_b$ stand for the shorthand of $\frac{\partial}{\partial a}$ and $\frac{\partial}{\partial b}$. The functional equation can be rewritten as:</p> <p>$$f(a+b) = \left(\partial_a^{n-1} + \partial_a^{n-2}\partial_b + \cdots + \partial_a\partial_b^{n-2} + \partial_b^{n-1}\right)(f(a)f(b))$$</p> <p>Apply $\partial_a - \partial_b$ on both sides, we get:</p> <p>$$ 0 = (\partial_a - \partial_b)f(a+b) = \left(\partial_a^n - \partial_b^n\right)(f(a)f(b))$$</p> <p>which is equivalent to $$(\partial_a^n f(a)) f(b) - f(a)(\partial_b^{n}f(b)) = 0$$ Since $a$ is independent of $b$, this implies the existence of a constant $\lambda$ such that</p> <p>$$\frac{d^n}{d a^{n}} f(a) = \lambda^{n}f(a)\tag{*1}$$</p> <p>Let us look at the case $\lambda \ne 0$. Let $\omega$ be a primitive $n^{th}$ root of unity. A general solution for $(*1)$ has the form:</p> <p>$$f(a) = \sum_{j=0}^{n-1} A_j e^{\lambda\,\omega^j a}$$</p> <p>where $A_j, j = 0,\ldots,n-1$ are constants. Substitute this back into the functional equation, we get:</p> <p>$$\sum_{j=0}^{n-1} A_j e^{\lambda\,\omega^j (a+b)} =\sum_{j=0}^{n-1} \sum_{k=0}^{n-1} ( A_j e^{\lambda\,\omega^j a})( A_k e^{\lambda\,\omega^k b})\left( \sum_{l=0}^{n-1} (\lambda\,\omega^j)^l (\lambda\,\omega^k)^{n-1-l} \right) \tag{*2}$$ The last factor in R.H.S of $(*2)$ is given by: $$\sum_{l=0}^{n-1} (\lambda\,\omega^j)^l (\lambda\,\omega^k)^{n-1-l} = \lambda^{n-1}\omega^{-k}\sum_{l=0}^{n-1}\omega^{(j-k)l} = n\lambda^{n-1} \omega^{-k} \delta_{jk}$$</p> <p>where $\delta_{jk}$ is the Kronecker delta. One can simplify $(*2)$ to:</p> <p>$$\sum_{j=0}^{n-1} A_j e^{\lambda\,\omega^j (a+b)} = \sum_{j=0}^{n-1} n \lambda^{n-1} \omega^{-j} A_j^2 e^{\lambda\,\omega^j(a + b)}$$</p> <p>This implies $A_j$ is either $0$ or $\frac{\omega^j}{n\lambda^{n-1}}$. 
For $\lambda \ne 0$, there are $2^{n}-1$ non-zero solutions for the functional equation:</p> <p>$$f(a) = \frac{1}{n\lambda^{n-1}} \sum_{j=0}^{n-1} \epsilon_j \omega^j e^{\lambda\,\omega^j a}\tag{*3}$$</p> <p>where $\epsilon_j \in \{0,1\}$ for $j = 0, \ldots, n-1$ and not all zero.</p> <p>Consider the special case all $\epsilon_j = 1$. If one expands the R.H.S. of $(*3)$ as a Laurent series at $\lambda = 0$, it is not hard to see that all the poles in $\lambda$ cancel out. This gives us a solution of the functional equation for $\lambda = 0$, namely:</p> <p>$$f(a) = \lim_{\lambda\to 0}\frac{1}{n\lambda^{n-1}} \sum_{j=0}^{n-1} \omega^j e^{\lambda\,\omega^j a} = \frac{a^{n-1}}{(n-1)!}\tag{*4}$$</p> <p><strong>EDIT</strong></p> <p>In fact, this is the only non-zero solution for $\lambda = 0$. Applying $\partial_a$ to both sides of the functional equation, we get:</p> <p>$$\begin{align}f'(a+b) &amp;= \partial_a f(a+b)\\ &amp;= \partial_a^n f(a)f(b) + (\partial_a^{n-1}\partial_b + \cdots + \partial_a\partial_b^{n-1})(f(a)f(b))\\ &amp;= (\partial_a^{n-2} + \partial_a^{n-3}\partial_b + \cdots + \partial_b^{n-2})(f'(a)f'(b)) \end{align}$$</p> <p>Notice $\partial_a^n f(a) = 0 \implies \partial_a^{n-1} f'(a) = 0$. This means that for any $\lambda = 0$ solution $f(\cdot)$ of the functional equation for $n-1$, $f'(\cdot)$ is a $\lambda = 0$ solution of the functional equation for $n-2$.</p> <p>If we have shown $(*4)$ is the only non-zero $\lambda = 0$ solution for $n &lt; N$, then for $n = N$, </p> <p>$$f'(a) = \frac{a^{N-2}}{(N-2)!} \;\;\text{ or }\;\; 0 \implies f(a) = \frac{a^{N-1}}{(N-1)!} + c\;\;\text{ or }\;\;c$$ for some constant $c$. Plugging this into the functional equation for $n = N$, $c$ needs to satisfy:</p> <p>$$c = (\partial_{a}^{N-1}f(a)) c + c(\partial_b^{N-1}f(b)) = 2 c\;\;\text{ or }\;\;0$$</p> <p>In both cases, $c = 0$ and $(*4)$ is indeed the only non-zero $\lambda = 0$ solution.</p>
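The $\lambda = 0$ solution $(*4)$ can be sanity-checked directly: with $f(a) = a^{n-1}/(n-1)!$ one has $f^{(k)}(a) = a^{n-1-k}/(n-1-k)!$, so the right-hand side of the functional equation is exactly the binomial expansion of $(a+b)^{n-1}/(n-1)!$. A small numeric sketch of mine (not part of the answer):

```python
import math

def f(x, n, k=0):
    # k-th derivative of x**(n-1) / (n-1)!; it vanishes once k > n - 1.
    m = n - 1 - k
    return x ** m / math.factorial(m) if m >= 0 else 0.0

def residual(n, a, b):
    # |f(a+b) - sum_k f^{(n-1-k)}(a) f^{(k)}(b)|; zero by the binomial theorem.
    lhs = f(a + b, n)
    rhs = sum(f(a, n, n - 1 - k) * f(b, n, k) for k in range(n))
    return abs(lhs - rhs)

res = [residual(n, 1.7, -0.4) for n in range(1, 8)]
```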
3,498,809
<p>Consider Ito's lemma in the following standard version <span class="math-container">$$h(W_t) = h(W_0) + \int_0^t \nabla h(W_s) dW_s + \frac{1}{2} \int_0^t \Delta h(W_s) ds.$$</span></p> <p>I am asking myself under which conditions, the deterministic time <span class="math-container">$t$</span> can be replaced by <span class="math-container">$t \wedge \tau$</span>, where <span class="math-container">$\tau$</span> is a stopping time. Does anybody have an idea?</p>
zhoraster
262,269
<p>Under the same conditions under which the It&ocirc; formula is valid. Indeed, the process <span class="math-container">$X_s = W_{\tau \wedge s}$</span> is an It&ocirc; process with stochastic differential <span class="math-container">$dX_s = \mathbf{1}_{[0,\tau]}(s) dW_s$</span> (see e.g. our <a href="https://www.wiley.com/en-us/Theory+and+Statistical+Applications+of+Stochastic+Processes-p-9781786300508" rel="nofollow noreferrer">book</a> with Yuliya Mishura, Theorem 8.4). Then, using the It&ocirc; formula and this "locality" property once more, <span class="math-container">$$ h(X_t) = h(X_0) + \int_0^t \nabla h(X_s)\mathbf{1}_{[0,\tau]}(s) dW_s + \frac{1}{2} \int_0^t \Delta h(X_s) \mathbf{1}_{[0,\tau]}(s)^2 ds \\ = h(W_0) + \int_0^{t \wedge \tau} \nabla h(W_s) dW_s + \frac{1}{2} \int_0^{t \wedge \tau} \Delta h(W_s) ds. $$</span></p>
3,498,809
<p>Consider Ito's lemma in the following standard version <span class="math-container">$$h(W_t) = h(W_0) + \int_0^t \nabla h(W_s) dW_s + \frac{1}{2} \int_0^t \Delta h(W_s) ds.$$</span></p> <p>I am asking myself under which conditions, the deterministic time <span class="math-container">$t$</span> can be replaced by <span class="math-container">$t \wedge \tau$</span>, where <span class="math-container">$\tau$</span> is a stopping time. Does anybody have an idea?</p>
John Dawkins
189,130
<p>If (as is customary) the stochastic integral is understood to be continuous in <span class="math-container">$t$</span> (a.s.) then the equality holds for all <span class="math-container">$t$</span> simultaneously, with probability 1. As such, <span class="math-container">$t$</span> can be replaced throughout by <em>any</em> non-negative random variable and the a.s. equality will persist.</p>
2,291,540
<p>Is it possible to have a sequence of continuous functions $\{f_n\}_{n=1}^\infty$ on $[a,b]$ that converges uniformly to a function $f$ but $f$ is not bounded on $[a,b]$?</p>
Affineline
448,123
<p>A uniform limit of continuous functions is necessarily continuous. Also, continuous functions on a closed, bounded interval such as $[a,b]$ are necessarily bounded. Combining the two, we get that no such sequence exists. </p>
9,696
<p>I am tutoring a Grade 2 girl in arithmetic. She has demonstrated an ability to add two-digit numbers with carrying. For example: </p> <p>$$\;\;14\\ +27\\ =41$$ </p> <p>I asked her to write this out horizontally, and this is what she produced. </p> <p>$$12+47=41$$ </p> <p>She evidently is failing to see the numbers and is confounding the vertical addition of the digits with the horizontal reading of the numbers. </p> <p>With practice, and prompting, she is able to get this right, but it seems as though she sees the sum as a matrix of four digits, and is missing the <em>numbers</em>. </p> <p>Any insights on how to help her?</p>
Amy B
5,321
<p>Does she have difficulty with reading or with taking in other visual information? Her problem might have nothing to do with understanding numbers and everything to do with a learning difference in how she perceives visual information. I've had students who could answer questions but couldn't handle a worksheet (with the same questions) if it was organized in an unusual way. You might check with the child's teacher to see if this is an issue and/or if she has an IEP (Individualised Education Plan). </p> <p>Note that the reason I suspect this is that she seems to understand the numbers when you question her. I suggest you rule out learning issues and continue to reinforce the different formats using ones and tens, as Gerhard suggested in his comment. </p>
2,625,763
<p>I am having trouble with factoring $2x^3 + 21x^2 +27x$. The answer is $x(x+9)(2x+3)$ but not sure how that was done. Obviously I factored out the $x$ to get $x(2x^2+21x+27)$ then from there I am lost. I tried the AC method and grouping. Can someone show the steps? Thanks! </p>
Mostafa Ayaz
518,023
<p>$$2x^3+21x^2+27x\\=x(2x^2+21x+27)\\=\dfrac{x}{2}(4x^2+42x+54)\\=\dfrac{x}{2}(2x+18)(2x+3)\\=x(x+9)(2x+3)$$for factorizing $4x^2+42x+54$ we know that it must in form of $(2x+a)(2x+b)$. So:$$4x^2+42x+54=(2x+a)(2x+b)=4x^2+(a+b)2x+ab$$which implies$$ab=54\\a+b=21$$clearly the only numbers satisfying those equalities are $a=18$ and $b=3$ or vice versa.</p> <p>Another way to attain such numbers when guessing them is not that easy is to solve the following equation$$4x^2+42x+54=0$$or$$2x^2+21x+27=0$$which leads to $$x=\dfrac{-21\pm\sqrt{{21^2-4\times 2\times 27}}}{4}=\dfrac{-21\pm 15}{4}$$therefore$$x_1=\dfrac{-21-15}{4}=-9\\x_2=-\dfrac{3}{2}$$so$$2x^2+21x+27=2(x+9)(x+\dfrac{3}{2})=(x+9)(2x+3)$$</p>
2,625,763
<p>I am having trouble with factoring $2x^3 + 21x^2 +27x$. The answer is $x(x+9)(2x+3)$ but not sure how that was done. Obviously I factored out the $x$ to get $x(2x^2+21x+27)$ then from there I am lost. I tried the AC method and grouping. Can someone show the steps? Thanks! </p>
Arnav Borborah
520,392
<p>Quite simple; The AC Method <em>can</em> be used here:</p> <p>$$2x^3 + 21x^2 +27x =$$ $$x(2x^2 + 21x +27) =$$ Now we have to find two factors that multiply to $54$ ($2 \times 27$), and add up to $21$. Two such numbers are $3$ and $18$, which are now used to split apart the polynomial. $$x((2x^2 + 3x) + (18x + 27)) =$$ From here it is simple grouping. $$x(x(2x + 3) + 9(2x + 3)) =$$ $$x(x + 9)(2x + 3)$$</p>
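Either route can be checked by expanding the claimed factorization back out and comparing with the original polynomial at integer points, e.g. with this small sketch (mine, not part of either answer):

```python
def original(x):
    # The polynomial to factor.
    return 2 * x**3 + 21 * x**2 + 27 * x

def factored(x):
    # The claimed factorization x(x + 9)(2x + 3).
    return x * (x + 9) * (2 * x + 3)

# Two cubics that agree on more than 3 integer points are identical,
# so this exact integer check is conclusive.
checks = [original(x) == factored(x) for x in range(-10, 11)]
```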
3,473,944
<p>So i have an object that moves in a straight line with initial velocity <span class="math-container">$v_0$</span> and starting position <span class="math-container">$x_0$</span>. I can give it constant acceleration <span class="math-container">$a$</span> over a fixed time interval <span class="math-container">$t$</span>. Now what i need is that when the time interval ends this object should stop exactly at a point <span class="math-container">$x_1$</span> with it's velocity being equal to <span class="math-container">$0$</span>. I need to find acceleration <span class="math-container">$a$</span> that i can give it in order for that to happen. </p> <p>The way i see it we've got a system of equations: <span class="math-container">$$ 0 = v_0 + a t $$</span> <span class="math-container">$$ x_1 = x_0 + v_0 t + \frac {a t^2} {2} $$</span> </p> <p>I have only one unknown, which is <span class="math-container">$a$</span>. </p> <p>Let's get <span class="math-container">$a$</span> from the first equation: <span class="math-container">$$ a = \frac { - v_0 } { t } $$</span> </p> <p>And put it into the second one: <span class="math-container">$$ x_1 = x_0 + v_0 t + \frac { - v_0 t } {2} $$</span> </p> <p>Now let's express initial velocity (<span class="math-container">$v_0$</span>) from that equation: <span class="math-container">$$ x_1 - x_0 = v_0 t + \frac { - v_0 t } {2} $$</span> <span class="math-container">$$ \frac { x_1 - x_0 } { t } = v_0 + \frac { - v_0 } {2} $$</span> <span class="math-container">$$ \frac { 2 ( x_1 - x_0 ) } { t } = 2 v_0 - v_0 $$</span> <span class="math-container">$$ v_0 = \frac { 2 ( x_1 - x_0 ) } { t } $$</span> </p> <p>And put it back into equation for acceleration: <span class="math-container">$$ a = \frac { - v_0 } { t } $$</span> <span class="math-container">$$ a = \frac { - \frac { 2 ( x_1 - x_0 ) } { t } } { t } $$</span> <span class="math-container">$$ a = - \frac { 2 ( x_1 - x_0 ) } { t^2 } $$</span> </p> <p>So we got an acceleration 
that i need to apply to an object over a time interval <span class="math-container">$t$</span>, so that it would stop at <span class="math-container">$x_1$</span> with velocity <span class="math-container">$0$</span>, right? </p> <p>But it doesn't work! </p> <p>Because it doesn't depend on initial velocity at all! So if my object is flying at 2 m/s then i would need to apply the same acceleration as if it was flying 100 m/s, or 1000 m/s? How come? </p> <p>Where am i being wrong? This all seems mathematically sound... Am i setting the wrong premises? Interpreting results in the wrong way? </p> <p>I really need it for my project, and i've been trying to solve this for weeks, studying different aspects of maths that might help me, but i just can't do it :( </p> <p>But this looks so simple! And yet i just can't do it. 11 years of school seem so useless right now... </p> <p>Help please </p>
Manuel Pena
341,519
<p>You are imposing too many constraints. Think about it this way. If you begin at speed <span class="math-container">$v_0$</span> and want a constant acceleration such that, in a time <span class="math-container">$t$</span>, you reach <span class="math-container">$0$</span> speed, then this acceleration must be: <span class="math-container">$$ a =\frac{0-v_0}{t} $$</span> There you have the dependency of <span class="math-container">$a$</span> on <span class="math-container">$v_0$</span>, but you won't be able to make the object stop at a point <span class="math-container">$x_1$</span>; that is, the stopping point would be part of the solution: <span class="math-container">$$ x_1 = x_0 + v_0t + \frac{1}{2}at^2 $$</span></p> <p>In the same way, if you really want to stop at a point <span class="math-container">$x_1$</span> in a time <span class="math-container">$t$</span>, decelerating at a constant rate, then you get a formula for <span class="math-container">$a$</span> that does not depend on <span class="math-container">$v_0$</span>, as <span class="math-container">$v_0$</span> would be part of the solution: <span class="math-container">$$ v_0 = 0-at $$</span></p> <p>(Finally, you could impose both the initial velocity <span class="math-container">$v_0$</span> and the stopping point <span class="math-container">$x_1$</span>, but then the time needed would be part of the solution.)</p>
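Concretely: once $v_0$, $x_0$ and $x_1$ are all prescribed, the time is forced to $t = 2(x_1 - x_0)/v_0$, and then $a = -v_0/t = -v_0^2/(2(x_1 - x_0))$, which does depend on $v_0$. A small sketch of mine computing the consistent solution and verifying it:

```python
def stopping_motion(x0, x1, v0):
    # Constant deceleration bringing the state (x0, v0) to rest exactly at x1.
    t = 2.0 * (x1 - x0) / v0      # forced by the two constraints
    a = -v0 / t                   # equivalently -v0**2 / (2 * (x1 - x0))
    # Verify position and velocity at time t with the kinematic equations.
    x_final = x0 + v0 * t + 0.5 * a * t * t
    v_final = v0 + a * t
    return t, a, x_final, v_final

t, a, xf, vf = stopping_motion(0.0, 10.0, 4.0)  # t = 5 s, a = -0.8 m/s^2
```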
3,009,543
<p>I am having great problems in solving this:</p> <p><span class="math-container">$$\lim\limits_{n\to\infty}\sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}$$</span></p> <p>I am trying to solve this for hours, no solution in sight. I tried so many ways on my paper here, which all lead to nonsense or to nowhere. I concluded that I have to use the third binomial formula here, so my next step would be:</p> <p><span class="math-container">$$a^3-b^3=(a-b)(a^2+ab+b^2)$$</span> so </p> <p><span class="math-container">$$a-b=\frac{a^3-b^3}{a^2+ab+b^2}$$</span></p> <p>I tried expanding it as well, which led to absolutely nothing. These are my writings to this:</p> <p><a href="https://i.stack.imgur.com/FyJ8t.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FyJ8t.jpg" alt="enter image description here"></a></p>
user
505,767
<p>By first order binomial expansion <span class="math-container">$(1+x)^r=1+rx + o(x)$</span>, we have</p> <p><span class="math-container">$$\sqrt[3]{n+\sqrt{n}}=\sqrt[3]{n}\, \left(1+\frac1{\sqrt n}\right)^\frac13=\sqrt[3]{n}+\frac{\sqrt[3]{n}}{3\sqrt n}+o\left(\frac{\sqrt[3]{n}}{\sqrt n}\right)=\sqrt[3]{n}+\frac{1}{3n^\frac16}+o\left(\frac{1}{n^\frac16}\right)$$</span></p> <p>therefore</p> <p><span class="math-container">$$\sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}=\frac{1}{3n^\frac16}+o\left(\frac{1}{n^\frac16}\right)$$</span></p>
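The asymptotic $\sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}\sim \frac{1}{3n^{1/6}}$ is easy to confirm numerically: the ratio of the difference to $\frac{1}{3n^{1/6}}$ tends to $1$. A quick check of mine (not part of the answer):

```python
def ratio(n):
    # (cbrt(n + sqrt(n)) - cbrt(n)) divided by 1/(3 n^(1/6)).
    diff = (n + n ** 0.5) ** (1.0 / 3.0) - n ** (1.0 / 3.0)
    return diff * 3.0 * n ** (1.0 / 6.0)

values = [ratio(10.0 ** p) for p in (2, 4, 6)]  # approaches 1 from below
```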
3,009,543
<p>I am having great problems in solving this:</p> <p><span class="math-container">$$\lim\limits_{n\to\infty}\sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}$$</span></p> <p>I am trying to solve this for hours, no solution in sight. I tried so many ways on my paper here, which all lead to nonsense or to nowhere. I concluded that I have to use the third binomial formula here, so my next step would be:</p> <p><span class="math-container">$$a^3-b^3=(a-b)(a^2+ab+b^2)$$</span> so </p> <p><span class="math-container">$$a-b=\frac{a^3-b^3}{a^2+ab+b^2}$$</span></p> <p>I tried expanding it as well, which led to absolutely nothing. These are my writings to this:</p> <p><a href="https://i.stack.imgur.com/FyJ8t.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FyJ8t.jpg" alt="enter image description here"></a></p>
User
125,635
<p>Consider the function <span class="math-container">$f(x)=x^{1/3}$</span>. By the mean value theorem there's a number <span class="math-container">$y\in (n, n+\sqrt n)$</span> such that <span class="math-container">$$ f(n+\sqrt n) - f(n) = f'(y)(n+\sqrt n - n)= \frac{y^{-2/3}}{3}\sqrt n&lt;n^{-2/3}\sqrt n=n^{-1/6}\to 0. $$</span></p>
1,355,901
<p>Let $A$ be the set of all integers $x$ such that $x = 2k$ for some integer $k$</p> <p>Let $B$ be the set of all integers $x$ such that $x = 2k+2$ for some integer $k$</p> <p>Give a formal proof that $A = B$.</p>
user2034716
135,232
<p>It suffices to show that both $A\subset B$ and $B\subset A$.</p> <p>Fix $x \in A$. Then we can write $x=2k$ for some $k$, which means we could also write $x=2k=2k-2+2=2(k-1)+2$. Thus $x\in B$, because for $k'=k-1$ we can write $x=2k'+2$.</p> <p>This proves one direction, that $A\subset B$. The other direction is nearly identical.</p>
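The two inclusions mirror the two membership tests below: $x = 2k$ has a witness $k = x/2$, and $x = 2k+2$ has a witness $k = x/2 - 1$, so the characterizations agree. A finite sanity check of mine:

```python
def in_A(x):
    # x = 2k for some integer k  (take k = x // 2)
    return x % 2 == 0

def in_B(x):
    # x = 2k + 2 for some integer k  (take k = x // 2 - 1)
    return (x - 2) % 2 == 0

# The two characterizations agree on every integer in a test window.
same = all(in_A(x) == in_B(x) for x in range(-100, 101))
```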
555,955
<p>Suppose I have two doors. One of them has a probability of $1/9$ to contain X, the other has a probability of $2/3$ to contain X. Then, supposing I pick randomly one of the two doors, what is the probability that it contains X?</p> <p>(If one contains X, the other can also contain X. They are independent but not mutually exclusive.)</p> <p>I'm not sure what the solution is - is it just the average of the probabilities? I need this as a stepping stone in a larger argument. Thanks.</p>
Tom
103,715
<p><em>Hint:</em> Let $E$ be the event that the door you open contains $X$. Assuming that you must choose either door $1$ or door $2$, but not both:</p> <p>$$ P(E) = P(E~|~\text{choose door } 1)P(\text{choose door } 1) + P(E~|~\text{choose door } 2)P(\text{choose door } 2) $$</p>
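Filling in the hint with equal door probabilities gives $P(E) = \tfrac12\cdot\tfrac19 + \tfrac12\cdot\tfrac23 = \tfrac{7}{18}$ — which is indeed just the average of the two probabilities. An exact-arithmetic sketch of mine:

```python
from fractions import Fraction

p_door1, p_door2 = Fraction(1, 2), Fraction(1, 2)      # pick a door at random
p_x_given_1, p_x_given_2 = Fraction(1, 9), Fraction(2, 3)

# Law of total probability over the door choice.
p_e = p_x_given_1 * p_door1 + p_x_given_2 * p_door2    # 7/18
```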
3,074,668
<p>Good evening,</p> <p>Could someone please demonstrate why this property is valid?</p> <blockquote> <p>Given <span class="math-container">$\sigma\in S_n$</span></p> <p><span class="math-container">$$\left|\prod_{i&lt;j} \frac{\sigma(j)-\sigma(i)}{j-i}\right|=1$$</span></p> </blockquote>
darij grinberg
586
<p>Detailed proof: See Exercise 5.13 <strong>(a)</strong> in <a href="https://github.com/darijgr/detnotes/releases/2019-01-10" rel="nofollow noreferrer">my <em>Notes on the combinatorial fundamentals of algebra</em>, 10th of January 2019</a>. The claim I prove there is more general: I show that if <span class="math-container">$x_1, x_2, \ldots, x_n$</span> are any <span class="math-container">$n$</span> complex numbers, and if <span class="math-container">$\sigma$</span> is any permutation of <span class="math-container">$\left\{1,2,\ldots,n\right\}$</span>, then <span class="math-container">\begin{equation} \prod_{i &lt; j} \left(x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)}\right) = \left(-1\right)^{\sigma} \cdot \prod_{i &lt; j} \left(x_i - x_j\right) , \label{darij1.eq.1} \tag{1} \end{equation}</span> where <span class="math-container">$\left(-1\right)^{\sigma}$</span> denotes the sign of the permutation <span class="math-container">$\sigma$</span>. In order to obtain your equation from \eqref{darij1.eq.1}, you have to set <span class="math-container">$x_i = i$</span> and take absolute values (so that the sign <span class="math-container">$\left(-1\right)^{\sigma}$</span> disappears, since its absolute value is <span class="math-container">$1$</span>).</p> <p>Let me sketch how to quickly prove the weaker equality <span class="math-container">\begin{equation} \left|\prod_{i &lt; j} \left(x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)}\right)\right| = \left|\prod_{i &lt; j} \left(x_i - x_j\right)\right| \label{darij1.eq.2} \tag{2} \end{equation}</span> (which is still sufficient for your purposes). 
This is what @NickPeterson has already suggested, but my more rigorous notations shall hopefully close the cracks which let confusion slip through.</p> <p>First of all, the absolute value of a product equals the product of the absolute values of the factors; thus, <span class="math-container">\begin{align} \left|\prod_{i &lt; j} \left(x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)}\right)\right| = \prod_{i &lt; j} \left| x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)} \right| . \end{align}</span></p> <p>Next, let <span class="math-container">$P$</span> be the set of all pairs <span class="math-container">$\left(i, j\right)$</span> of integers <span class="math-container">$i, j \in \left\{1,2,\ldots,n\right\}$</span> satisfying <span class="math-container">$i &lt; j$</span>; also, let <span class="math-container">$G$</span> be the set of all <span class="math-container">$2$</span>-element subsets of <span class="math-container">$\left\{1,2,\ldots,n\right\}$</span>. Note that the product sign "<span class="math-container">$\prod\limits_{i &lt; j}$</span>" is equivalent to "<span class="math-container">$\prod\limits_{\left(i, j\right) \in P}$</span>".</p> <p>The two sets <span class="math-container">$P$</span> and <span class="math-container">$G$</span> have the same size (namely, <span class="math-container">$\dbinom{n}{2} = n\left(n-1\right) / 2$</span>), and this is no coincidence: There is a bijection from <span class="math-container">$P$</span> to <span class="math-container">$G$</span>. This bijection simply maps each pair <span class="math-container">$\left(i, j\right)$</span> to the two-element set <span class="math-container">$\left\{i, j\right\}$</span>. 
The inverse of this bijection maps each two-element set to the pair consisting of its smaller element and its larger element (in this order).</p> <p>The permutation <span class="math-container">$\sigma$</span> of <span class="math-container">$\left\{1,2,\ldots,n\right\}$</span> gives rise to a permutation <span class="math-container">$\sigma_*$</span> of the set <span class="math-container">$G$</span>, which sends each two-element subset <span class="math-container">$I$</span> to <span class="math-container">$\sigma\left(I\right)$</span> (in other words, it sends each two-element subset <span class="math-container">$\left\{i,j\right\}$</span> to <span class="math-container">$\left\{\sigma\left(i\right), \sigma\left(j\right)\right\}$</span>). Why is this a permutation of <span class="math-container">$G$</span>? Well, again, its inverse is easy to find (it does the same thing, just with <span class="math-container">$\sigma^{-1}$</span> instead of <span class="math-container">$\sigma$</span>). So <span class="math-container">$\sigma_*$</span> is a permutation of <span class="math-container">$G$</span>, i.e., a bijection from <span class="math-container">$G$</span> to <span class="math-container">$G$</span>.</p> <p>Now the crucial insight: If <span class="math-container">$\left(i, j\right) \in P$</span>, then the absolute value <span class="math-container">$\left| x_i - x_j \right|$</span> depends only on the set <span class="math-container">$\left\{i, j\right\} \in G$</span> (not on the pair <span class="math-container">$\left(i, j\right) \in P$</span>). In other words, if <span class="math-container">$I \in G$</span> is any two-element subset, then we can define a real number <span class="math-container">$f_I$</span> by setting <span class="math-container">\begin{align} f_I = \left| x_i - x_j \right|, \qquad \text{ where $I$ is written as $I = \left\{i, j\right\}$}. 
\end{align}</span> In order to formally prove this, you should recall that there are exactly two ways of writing <span class="math-container">$I$</span> as <span class="math-container">$I = \left\{i, j\right\}$</span>, and check that these two ways lead to the same value of <span class="math-container">$\left| x_i - x_j \right|$</span> (easy: these two ways only differ in the order of elements, and we have <span class="math-container">$\left| x_a - x_b \right| = \left| x_b - x_a \right|$</span>).</p> <p>Note that every <span class="math-container">$\left(i, j\right) \in P$</span> satisfies <span class="math-container">$\sigma_* \left( \left\{ i, j \right\} \right) = \left\{ \sigma\left(i\right), \sigma\left(j\right) \right\}$</span> and thus <span class="math-container">\begin{align} f_{\sigma_* \left( \left\{ i, j \right\} \right)} = f_{\left\{ \sigma\left(i\right), \sigma\left(j\right) \right\}} = \left| x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)} \right| \label{darij1.eq.3} \tag{3} \end{align}</span> (by the definition of <span class="math-container">$f_{\left\{ \sigma\left(i\right), \sigma\left(j\right) \right\}}$</span>).</p> <p>Now, <span class="math-container">\begin{align} \left|\prod_{i &lt; j} \left(x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)}\right)\right| &amp; = \prod_{i &lt; j} \left| x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)} \right| \\ &amp; = \prod_{\left(i, j\right) \in P} \underbrace{\left| x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)} \right|}_{\substack{ = f_{\sigma_* \left( \left\{ i, j \right\} \right)} \\ \left(\text{by \eqref{darij1.eq.3}}\right)}} \\ &amp; \qquad \left(\text{since "$\prod\limits_{i &lt; j}$" is equivalent to "$\prod\limits_{\left(i, j\right) \in P}$"}\right) \\ &amp; = \prod_{\left(i, j\right) \in P} f_{\sigma_* \left( \left\{ i, j \right\} \right)} \\ &amp; = \prod_{I \in G} f_{\sigma_* \left(I\right)} \end{align}</span> (here, we have substituted <span class="math-container">$I$</span> for 
<span class="math-container">$\left\{ i, j \right\}$</span> in the product, since the map <span class="math-container">$G \to P, \ \left(i, j\right) \mapsto \left\{ i, j \right\}$</span> is a bijection). Thus, <span class="math-container">\begin{align} \left|\prod_{i &lt; j} \left(x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)}\right)\right| = \prod_{I \in G} f_{\sigma_* \left(I\right)} = \prod_{I \in G} f_I \label{darij1.eq.4} \tag{4} \end{align}</span> (here, we have substituted <span class="math-container">$I$</span> for <span class="math-container">$\sigma_* \left(I\right)$</span> in the product, since <span class="math-container">$\sigma_*$</span> is a bijection).</p> <p>Note that the right hand side of \eqref{darij1.eq.4} does not depend on <span class="math-container">$\sigma$</span>. Applying the same reasoning to the permutation <span class="math-container">$\operatorname{id}$</span> instead of <span class="math-container">$\sigma$</span>, we thus obtain <span class="math-container">\begin{align} \left|\prod_{i &lt; j} \left(x_{\operatorname{id}\left(i\right)} - x_{\operatorname{id}\left(j\right)}\right)\right| = \prod_{I \in G} f_I . \end{align}</span> In other words, <span class="math-container">\begin{align} \left|\prod_{i &lt; j} \left(x_i - x_j\right)\right| = \prod_{I \in G} f_I . \end{align}</span> Comparing this equality with \eqref{darij1.eq.4}, we obtain <span class="math-container">$\left|\prod_{i &lt; j} \left(x_{\sigma\left(i\right)} - x_{\sigma\left(j\right)}\right)\right| = \left|\prod_{i &lt; j} \left(x_i - x_j\right)\right|$</span>. Thus, \eqref{darij1.eq.2} is proven.</p>
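The identity in the exercise is also easy to test exhaustively for small $n$: the product is exactly $\pm 1$ (it equals the sign of $\sigma$, by \eqref{darij1.eq.1} with $x_i = i$). A brute-force sketch of mine, not part of the answer:

```python
import itertools

def signed_product(sigma):
    # sigma is a tuple (sigma(1), ..., sigma(n)); positions are 0-based,
    # but the differences j - i are the same in either indexing.
    n = len(sigma)
    prod = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            prod *= (sigma[j] - sigma[i]) / (j - i)
    return prod

# All 24 permutations of {1, 2, 3, 4}, in lexicographic order.
results = [signed_product(s) for s in itertools.permutations(range(1, 5))]
```

Half of the 24 permutations are even and half are odd, so the signed products sum to zero.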
860,247
<p>Simplify $$\frac{3x}{x+2} - \frac{4x}{2-x} - \frac{2x-1}{x^2-4}$$</p> <ol> <li><p>First I expanded $x²-4$ into $(x+2)(x-2)$. There are 3 denominators. </p></li> <li><p>So I multiplied the numerators into: $$\frac{3x(x+2)(2-x)}{(x+2)(x-2)(2-x)} - \frac{4x(x+2)(x-2)}{(x+2)(x-2)(2-x)} - \frac{2x-1(2-x)}{(x+2)(x-2)(2-x)} $$</p></li> </ol> <p>I then tried 2 different approaches:</p> <ol> <li>Calculated it without eliminating the denominator into: $$\frac{-6x²-5x+2}{(x+2)(x-2)(2-x)}$$</li> <li>Calculated it by multiplying it out to: $$\frac{-6x+2x²+2}{(x+2)(x-2)(2-x)}$$</li> </ol> <p>I can't seem to simplify them further and so they seem incorrect. Something I missed? Help! </p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>As $$x^2+x=\frac{(2x+1)^2-1^2}4$$ set $$2x+1=\sec\theta$$</p> <p>For $x=0,\sec\theta=1\implies\theta=0$ and for $x=1,\sec\theta=3\implies \theta=\arccos\frac13$</p>
860,247
<p>Simplify $$\frac{3x}{x+2} - \frac{4x}{2-x} - \frac{2x-1}{x^2-4}$$</p> <ol> <li><p>First I expanded $x²-4$ into $(x+2)(x-2)$. There are 3 denominators. </p></li> <li><p>So I multiplied the numerators into: $$\frac{3x(x+2)(2-x)}{(x+2)(x-2)(2-x)} - \frac{4x(x+2)(x-2)}{(x+2)(x-2)(2-x)} - \frac{2x-1(2-x)}{(x+2)(x-2)(2-x)} $$</p></li> </ol> <p>I then tried 2 different approaches:</p> <ol> <li>Calculated it without eliminating the denominator into: $$\frac{-6x²-5x+2}{(x+2)(x-2)(2-x)}$$</li> <li>Calculated it by multiplying it out to: $$\frac{-6x+2x²+2}{(x+2)(x-2)(2-x)}$$</li> </ol> <p>I can't seem to simplify them further and so they seem incorrect. Something I missed? Help! </p>
medicu
65,848
<p>To calculate such integrals, a change of variable of "Euler type" is often very useful: $$\sqrt{x+x^2} = x-t $$ with $ x= \frac{t^2}{2t+1}$ and $dx= \frac{2t(t+1)}{(2t+1)^2}\,dt $</p> <p>The integral reduces to the calculation of a rational integral:</p> <p>$$\int_0^1 x^{2}\sqrt{x+x^2}\,dx =2\int_{1-\sqrt{2}}^0 \frac{t^6(t+1)^2}{(2t+1)^5}\,dt \cdots $$</p>
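A numerical sanity check of the Euler substitution (my addition, not part of the original answer). Differentiating $x=\frac{t^2}{2t+1}$ gives $dx=\frac{2t(t+1)}{(2t+1)^2}\,dt$, so the rational integrand carries $(2t+1)^5$ in the denominator; both sides should then agree:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Left side: ∫_0^1 x^2 √(x + x^2) dx
lhs = simpson(lambda x: x**2 * math.sqrt(x + x**2), 0.0, 1.0)

# Right side after √(x+x²) = x - t: 2 ∫_{1-√2}^0 t^6 (t+1)^2 / (2t+1)^5 dt
rhs = 2 * simpson(lambda t: t**6 * (t + 1)**2 / (2 * t + 1)**5,
                  1 - math.sqrt(2), 0.0)
print(lhs, rhs)
```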
1,378,536
<p>Here is a question that naturally arose in the study of some specific integrals. I'm curious whether <em>nice real analysis tools</em> are known for calculating such integrals (<em>including all possible sources<br> in the literature that are publicly available</em>). At some point I'll add my <em>real analysis</em> solution.<br> It's a question for informative purposes rather than about finding solutions; the solution is optional.</p> <p>Prove that</p> <p>$$\int_{-1}^1 \frac{1}{\pi^2+(2 \operatorname{arctanh}(x))^2} \, dx=\frac{1}{6}. $$</p> <p><em>Here is a supplementary question:</em></p> <p>$$\int_{-1}^1 \frac{\log(1-x)}{\pi^2+(2 \operatorname{arctanh}(x))^2} \, dx=\frac{1}{4}+\frac{\gamma }{6}+\frac{\log (2)}{6}-2 \log (A) $$</p> <p>where $A$ is the <a href="https://en.wikipedia.org/wiki/Glaisher%E2%80%93Kinkelin_constant">Glaisher–Kinkelin constant</a>.</p> <p>For the passionate about integrals, series and limits.</p>
Random Variable
16,033
<p>After making the substitution $u = \text{arctanh}(x)$, we could use the Laplace transform $$\int_{0}^{\infty} \cos(ax) \, e^{-bx} \, dx = \frac{b}{a^{2}+b^{2}} \, , \, \text{Re} (b) &gt;0 $$</p> <p>and then switch the order of integration.</p> <p>Specifically,</p> <p>$$ \begin{align} \int_{-\infty}^{\infty} \frac{\text{sech}^{2}(u)}{4u^{2} + \pi^{2}} \, du &amp;= \frac{1}{\pi} \int_{-\infty}^{\infty} \text{sech}^{2}(u) \int_{0}^{\infty} \cos(2ut)\, e^{- \pi t} \, dt \, du \\ &amp;= \frac{1}{\pi} \int_{0}^{\infty} e^{- \pi t} \int_{-\infty}^{\infty} \text{sech}^{2} (u) \cos(2tu) \, du \, \ dt. \end{align}$$</p> <p>The inside integral is basically the second Fourier transform that Ron Gordon uses in his answer.</p> <p>A relatively quick way to evaluate it is to integrate the complex function $f(z) = \text{sech}^{2}(z) \, e^{2itz}$ around a rectangular contour in the upper half-plane of height $\pi$ and use the fact that $\text{sech}^{2} (z)$ is $i \pi$-periodic.</p> <p>Doing so we get</p> <p>$$\begin{align} \int_{-\infty}^{\infty} \text{sech}^{2}(u) \, e^{2itu} \, du - \int_{-\infty}^{\infty} \text{sech}^2(u) \, e^{2it(u+ i \pi)} \, du &amp;= 2 \pi i \, \text{Res}[f(z), i \pi /2] \\ &amp;= 2 \pi i \, \text{Res} [f(z+ i \pi/2), 0] \\ &amp;= 2 \pi i \, \text{Res} \left[-\text{csch}^{2}(z) e^{2i tz} e^{-\pi t} ,0 \right] \\ &amp;= 2 \pi i \left(-2ite^{- \pi t} \right) \end{align} $$</p> <p>since $\text{csch}^{2}(z) = \frac{1}{z^{2}} + \mathcal{O}(1).$</p> <p>Then combining the two integrals on the left, we get</p> <p>$$ \int_{-\infty}^{\infty} \text{sech}^{2}(u) \, e^{2itu} \, du = \int_{-\infty}^{\infty} \text{sech}^{2}(u) \cos(2tu) \, du = 2 \pi t \, \frac{2 \, e^{- \pi t}}{1-e^{- 2 \pi t}} = 2 \pi t \, \text{csch} (\pi t). 
$$</p> <p>(I couldn't think of an approach that avoided complex analysis entirely.)</p> <p>Therefore,</p> <p>$$ \int_{-\infty}^{\infty} \frac{\text{sech}^{2}(u)}{\pi^{2}+4u^{2}} \, du = 2 \int_{0}^{\infty} t \, \text{csch} (\pi t) \, e^{- \pi t}\, dt. $$</p> <p>After making the substitution $w = 2t$, we end up with the same integral that results from using Parseval's theorem. You can refer to Ron Gordon's <a href="https://math.stackexchange.com/a/1378652/16033">answer</a> to complete the evaluation.</p>
robjohn
13,854
<p>$\newcommand{\sech}{\operatorname{sech}}\newcommand{\arctanh}{\operatorname{arctanh}}\newcommand{\Res}{\operatorname*{Res}}$ $\Res\limits_{z=\frac\pi2i}\left(\frac{\sech^2(z)}{\pi^2+4z^2}\right)=-i\frac{3+\pi^2}{12\pi^3}$ and for $k\ge1$, $\Res\limits_{z=\frac{(2k+1)\pi}2i}\left(\frac{\sech^2(z)}{\pi^2+4z^2}\right)=\frac{8z}{\left(\pi^2+4z^2\right)^2}$. Therefore, summing the residues in the upper half-plane, we get $$ \begin{align} \int_{-1}^1\frac1{\pi^2+(2\arctanh(x))^2}\,\mathrm{d}x &amp;=\int_{-\infty}^\infty\frac1{\pi^2+4u^2}\,\mathrm{d}\tanh(u)\\ &amp;=\int_{-\infty}^\infty\frac{\sech^2(u)}{\pi^2+4u^2}\,\mathrm{d}u\\ &amp;=2\pi i\left(-i\frac{3+\pi^2}{12\pi^3}+\frac{i}{4\pi^3}\sum_{k=1}^\infty\frac{2k+1}{\left(k^2+k\right)^2}\right)\\ &amp;=2\pi i\left(-i\frac{3+\pi^2}{12\pi^3}+\frac{i}{4\pi^3}\sum_{k=1}^\infty\left(\frac1{k^2}-\frac1{(k+1)^2}\right)\right)\\[4pt] &amp;=\frac16 \end{align} $$ once we collapse the telescoping series.</p>
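A numerical check of the final value (my addition): since $\operatorname{sech}^2 u \sim 4e^{-2|u|}$, truncating the real line to $[-20,20]$ costs far less than the tolerance below.

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# sech^2(u) / (pi^2 + 4u^2), integrated over a truncation of the real line
f = lambda u: (1.0 / math.cosh(u))**2 / (math.pi**2 + 4.0 * u**2)
I = simpson(f, -20.0, 20.0)
print(I, 1.0 / 6.0)
```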
3,190,594
<p>From Rick Durrett's book <em>Probability: Theory and Examples</em>:</p> <blockquote> <p>We define the conditional expectation of <span class="math-container">$X$</span> given <span class="math-container">$\mathcal{G}$</span>, <span class="math-container">$E(X | \mathcal{G})$</span> to be any random variable <span class="math-container">$Y$</span> that has</p> <p>(1) <span class="math-container">$Y \in \mathcal{G}, \text { i.e., is } \mathcal{G} \text { measurable }$</span></p> <p>(2) <span class="math-container">$\text {for all } A \in \mathcal{G}, \int_{A} X d P=\int_{A} Y d P$</span></p> </blockquote> <p>And in other materials I found:</p> <blockquote> <p>Let <span class="math-container">$(\Omega, \mathscr{F}, P)$</span> be a probability space and let <span class="math-container">$\mathscr{G}$</span> be a σ−algebra contained in <span class="math-container">$\mathscr{F}$</span>. For any real random variable <span class="math-container">$X \in L^{1}(\Omega, \mathscr{F}, P)$</span>, <span class="math-container">$\operatorname{define} E(X | \mathscr{G})$</span> to be the unique random variable <span class="math-container">$Z \in L^{1}(\Omega, \mathscr{G}, P)$</span> such that for every bounded <span class="math-container">$\mathscr{G}-\text { measurable }$</span> random variable <span class="math-container">$Y$</span>, <span class="math-container">$$E(X Y)=E(Z Y)$$</span></p> </blockquote>
kccu
255,727
<p>Definition 1 <span class="math-container">$\Rightarrow$</span> Definition 2 follows from Theorem 4.1.14 in <a href="https://services.math.duke.edu/~rtd/PTE/PTE5_011119.pdf" rel="nofollow noreferrer">Durrett (5th edition)</a> and by property (2) with <span class="math-container">$A=\Omega$</span>. </p> <p>Definition 2 <span class="math-container">$\Rightarrow$</span> Definition 1 follows by taking <span class="math-container">$Y=1_A$</span> for <span class="math-container">$A \in \mathscr{G}=\mathcal{F}$</span>.</p>
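A toy finite example (my illustration, not from either text) showing the two definitions agree: on a four-point space, $E(X\mid\mathcal G)$ averages $X$ over the blocks of the generating partition, and both defining identities hold exactly (using `fractions.Fraction` to keep the arithmetic exact).

```python
from fractions import Fraction

omega = [0, 1, 2, 3]
P = {w: Fraction(1, 4) for w in omega}      # uniform probability
X = {0: 1, 1: 3, 2: 2, 3: 6}                # an integrable random variable
blocks = [{0, 1}, {2, 3}]                   # partition generating the sigma-algebra G

# E(X|G): average of X over the block containing each point
Z = {}
for B in blocks:
    pB = sum(P[w] for w in B)
    avg = sum(X[w] * P[w] for w in B) / pB
    for w in B:
        Z[w] = avg

# Definition 1: ∫_A X dP = ∫_A Z dP for every A in G (unions of blocks)
G_sets = [set(), {0, 1}, {2, 3}, {0, 1, 2, 3}]
ok1 = all(sum(X[w] * P[w] for w in A) == sum(Z[w] * P[w] for w in A)
          for A in G_sets)

# Definition 2: E(XY) = E(ZY) for a bounded G-measurable Y (constant on blocks)
Y = {0: 5, 1: 5, 2: -2, 3: -2}
ok2 = (sum(X[w] * Y[w] * P[w] for w in omega)
       == sum(Z[w] * Y[w] * P[w] for w in omega))
print(Z, ok1, ok2)
```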
1,285,177
<p>I know this is very simple and I'm missing something trivial here...</p> <p>I'm having trouble converting this set of equations to polar form:</p> <p>$$ \dot{x_1}=x_2-x_1 (x_1^2+x_2^2-1)\\ \dot{x_2}=-x_1-x_2 (x_1^2+x_2^2-1) $$</p> <p>where</p> <p>$$ r= (x_1^2+x_2^2)^{1/2}\\ \theta=\arctan\left(\frac{x_2}{x_1}\right) $$</p> <p>The book I'm going through has these converted to the following equations:</p> <p>$$ \dot{r}=-r(r^2-1)\\ \dot{\theta}=-1 $$</p> <p>Here are the steps I've taken...</p> <p>$$ \frac{dr}{dt}=(x_1\dot{x_1}+x_2\dot{x_2})(x_1^2+x_2^2)^{-1/2}\\ \dot{x_1}+\dot{x_2}=x_2-x_1-(x_1+x_2)(x_1^2+x_2^2-1)\\ \dot{x_1}+\dot{x_2}=x_2-x_1-(x_1+x_2)(r^2-1) $$</p> <p>Now I'm not sure what the next step to take would be... I've tried a few things and none of them got me to the correct result. Any help would be appreciated! :)</p>
Brian Fitzpatrick
56,960
<p>Note that $$ A=\begin{bmatrix}a&amp;b\\c&amp;d\end{bmatrix} $$ satisfies $A^\top=-A$ if and only if $$ \begin{bmatrix} a&amp;c\\b&amp;d \end{bmatrix} = \begin{bmatrix} -a&amp;-b\\-c&amp;-d \end{bmatrix} $$ That is, $A^\top=-A$ if and only if $$ A= \begin{bmatrix} a&amp;b\\c&amp;d \end{bmatrix} = \begin{bmatrix} 0&amp;b\\-b&amp;0 \end{bmatrix} = b \begin{bmatrix} 0&amp;1\\-1&amp;0 \end{bmatrix} $$ Hence $$ \DeclareMathOperator{Skew}{Skew}\Skew_2=\DeclareMathOperator{Span}{Span}\Span\left\{\begin{bmatrix}0&amp;1\\-1&amp;0\end{bmatrix}\right\} $$ In particular, $\Skew_2$ is a subspace of $M_2$ with $\dim\Skew_2=1$.</p> <p>One of the advantages of the above argument is by showing that $\Skew_2$ is spanned by a subset of $M_2$ we don't have to manually check that $\Skew_2$ is a subspace. The disadvantage, of course, is that the formulas are harder to work out in the general case $\Skew_n$. @math.noob's answer gives the most elegant proof that $\Skew_n$ is a subspace of $M_n$.</p>
math.n00b
135,233
<p>More generally, even if $A$ and $B$ are $n \times n$ matrices, the skew-symmetric matrices still form a subspace, because $(A+\lambda B)^T = A^T + \lambda B^T$.</p> <p>So, if $A^T = -A$ and $B^T = -B$ we get $(A+\lambda B)^T = -(A + \lambda B)$, which proves that $A + \lambda B$ belongs to the set of skew-symmetric $n \times n$ matrices.</p> <p>About the dimension, well, you just need to determine the upper triangular part of the matrix. The diagonal entries are going to be zero because for any $i$ we must have $a_{ii} = -a_{ii}$, which implies $a_{ii} = 0$. So, how many entries should you fill in order to form an upper triangular matrix with zero diagonal entries?</p>
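A small check of both points (my illustration): closure under $A+\lambda B$, and the count of free entries above the diagonal, which gives the dimension $n(n-1)/2$.

```python
import random

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_skew(A):
    # A^T == -A
    return transpose(A) == [[-x for x in row] for row in A]

def rand_skew(n):
    # fill the strict upper triangle freely; the diagonal is forced to 0
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            A[i][j] = random.randint(-9, 9)
            A[j][i] = -A[i][j]
    return A

random.seed(0)
n = 4
A, B = rand_skew(n), rand_skew(n)
lam = 3
C = [[A[i][j] + lam * B[i][j] for j in range(n)] for i in range(n)]
closed = is_skew(A) and is_skew(B) and is_skew(C)        # subspace closure
dim = sum(1 for i in range(n) for j in range(i + 1, n))  # free entries above diagonal
print(closed, dim, n * (n - 1) // 2)
```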
362,895
<p>I have been having a lot of trouble teaching myself rings, so much so that even "simple" proofs are really difficult for me. I think I am finally starting to get it, but just to be sure could some one please check this proof that $\mathbb Z[i]/\langle 1 - i \rangle$ is a field. Thank you.</p> <p>Proof: Notice that $$\langle 1 - i \rangle\\ \Rightarrow 1 = i\\ \Rightarrow 2 = 0.$$ Thus all elements of the form $a+ bi + \langle 1 - i \rangle$ can be rewritten as $a+ b + \langle 1 - i \rangle$. But since $2=0$ this implies that the elements that are left can be written as $1 + \langle 1 - i \rangle$ or $0 + \langle 1 - i \rangle$. Thus $$ \mathbb Z[i]/ \langle 1 - i \rangle = \{ 0+ \langle 1 - i \rangle , 1 + \langle 1 - i \rangle\}. $$</p> <p>This is obviously a commutative ring with unity and no zero-divisors, thus it is a finite integral domain, and hence is a field. $\square$</p>
Math Gems
75,092
<p>One must also prove that the quotient ring is $\ne \{0\}.\:$ Below is a complete proof. $\rm\quad \Bbb Z\stackrel{h}{\to}\, \Bbb Z[{\it i}\,]/(1\!-\!{\it i}\,)\:$ is $\rm\,\color{#0b0}{\bf onto,\:}$ by $\rm\:mod\,\ 1\!-\!{\it i}\,:\ {\it i}\,\equiv 1\phantom{\dfrac{|}{|}}\!\!\!\Rightarrow\:a\!+\!b\,{\it i}\,\equiv a\!+\!b\in \Bbb Z\ $<br> $\rm\quad n\in ker\ h\iff 1\!-\!{\it i}\,\mid n\iff\phantom{\dfrac{|}{|_|}}\!\!\!\!\!\!\! \dfrac{n}{1\!-\!{\it i}}\, =\, \dfrac{n\,(1\!+\!{\it i}\,)}2\,\in\, \Bbb Z[{\it i}\,] \iff \color{#c00}2\mid n\ $<br> $\rm\quad So \ \ \ \Bbb Z[{\it i}\,]/(1\!-\!{\it i}\,)\, \color{#0b0}{\bf =\ Im\:h}\,\cong\, \Bbb Z/ker\:h \,=\, \Bbb Z/\color{#c00}2\,\Bbb Z\, =\, \Bbb F_2\ $ $\ \ $ <strong>QED</strong></p>
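A small computational check of the map used above (my addition; a Gaussian integer $a+bi$ is represented as a tuple): $h(a+bi) = a+b \bmod 2$ is a ring homomorphism onto $\Bbb F_2$ whose kernel is exactly the set of multiples of $1-i$.

```python
import itertools

def h(z):
    # the map a+bi -> a+b (mod 2)
    a, b = z
    return (a + b) % 2

def add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def mul(z, w):
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

def divisible_by_1_minus_i(z):
    # (a+bi)/(1-i) = ((a-b) + (a+b)i)/2 lies in Z[i] iff both parts are even
    a, b = z
    return (a - b) % 2 == 0 and (a + b) % 2 == 0

sample = list(itertools.product(range(-2, 3), repeat=2))
hom = all(h(add(z, w)) == (h(z) + h(w)) % 2
          and h(mul(z, w)) == (h(z) * h(w)) % 2
          for z in sample for w in sample)
kernel = all((h(z) == 0) == divisible_by_1_minus_i(z) for z in sample)
print(hom, kernel)
```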
3,632,097
<p>Let <span class="math-container">$F$</span> be a sheaf on a topological space <span class="math-container">$X$</span> and let <span class="math-container">$U$</span> be an open subset of <span class="math-container">$X$</span>. Denote by <span class="math-container">$F|_U$</span> the restricted sheaf of <span class="math-container">$F$</span>. Then, for any <span class="math-container">$y\in U$</span>, do we have <span class="math-container">$F_y=(F|_U)_y$</span>?</p>
diracdeltafunk
19,006
<p>@Bueggi has the right answer from first principles. I'd like to mention a slightly more abstract way to think about it, which I hope will be helpful to see if you're starting to learn about sheaves.</p> <p>If <span class="math-container">$*$</span> is a topological space with one point, the category of sheaves on <span class="math-container">$*$</span> is canonically isomorphic to the category of sets by taking global sections, that is, <span class="math-container">$F \mapsto F(*) : \mathsf{Sh}(*) \xrightarrow{\Gamma} \mathsf{Set}$</span> is an isomorphism of categories.</p> <p>Now, if <span class="math-container">$X$</span> is a topological space and <span class="math-container">$p \in X$</span>, there is a unique continuous map <span class="math-container">$i_p : * \to X$</span> whose image is <span class="math-container">$\{p\}$</span>. For any sheaf <span class="math-container">$F$</span> on <span class="math-container">$X$</span>, <span class="math-container">$F_p$</span> is precisely the set of global sections of the inverse image sheaf <span class="math-container">$i_p^{-1} F$</span> — you should try to prove this! In other words, the composition <span class="math-container">$\mathsf{Sh}(X) \xrightarrow{i_p^{-1}} \mathsf{Sh}(*) \xrightarrow{\Gamma} \mathsf{Set}$</span> is equal to the stalk functor <span class="math-container">$F \mapsto F_p : \mathsf{Sh}(X) \to \mathsf{Set}$</span> (this requires also checking what happens to morphisms).</p> <p>From this perspective, it's easy to see what happens in your question: let <span class="math-container">$p \in U$</span> where <span class="math-container">$U$</span> is an open subset of <span class="math-container">$X$</span>. 
Then the map <span class="math-container">$i_p : * \to X$</span> factors as <span class="math-container">$i_U \circ j_p$</span> where <span class="math-container">$j_p : * \to U$</span> has image <span class="math-container">$\{p\}$</span> and <span class="math-container">$i_U : U \to X$</span> is the inclusion. By functoriality of the inverse image construction, we have that <span class="math-container">$i_p^{-1} = (i_U \circ j_p)^{-1} = j_p^{-1} \circ i_U^{-1}$</span>. If <span class="math-container">$F$</span> is a sheaf on <span class="math-container">$X$</span>, then <span class="math-container">$i_U^{-1}(F) = F|_U$</span>, so <span class="math-container">$i_p^{-1}(F) = j_p^{-1}(F|_U)$</span>. The set of global sections of the left-hand side is just <span class="math-container">$F_p$</span>, and the set of global sections of the right-hand side is <span class="math-container">$(F|_U)_p$</span>.</p> <p>To rephrase: taking the stalk at <span class="math-container">$p$</span> is equivalent to pulling back to the point <span class="math-container">$\{p\}$</span>. Pulling back sheaves is a functorial construction, so it doesn't matter whether we pull back directly to <span class="math-container">$\{p\}$</span> or pull back to <span class="math-container">$U$</span> first and then pull back to <span class="math-container">$\{p\}$</span>. The pullback of a sheaf to an open subset is just the restriction, so we have the desired result.</p> <p>This point of view can be helpful for thinking about more than just stalks of restrictions; for example this argument shows that pulling back a sheaf "preserves its stalks" in the following sense:</p> <p><strong>Fact</strong> Let <span class="math-container">$f : X \to Y$</span> be a continuous map between topological spaces. Let <span class="math-container">$F$</span> be a sheaf on <span class="math-container">$Y$</span> and let <span class="math-container">$x \in X$</span>. 
Then <span class="math-container">$(f^{-1}F)_x = F_{f(x)}$</span>.</p>
874,946
<p>What is the remainder when the below number is divided by $100$? $$ 1^{1} + 111^{111}+11111^{11111}+1111111^{1111111}+111111111^{111111111}\\+5^{1}+555^{111}+55555^{11111}+5555555^{1111111}+55555555^{111111111} $$ How to approach this type of question? I tried to brute force using Python, but it took very long time.</p>
Ben Grossmann
81,360
<p>Two facts help here:</p> <ol> <li>if $a \equiv b \pmod m$, then $a^n \equiv b^n \pmod m$</li> <li>For any $a$ relatively prime to $100$, $a^{40} \equiv 1 \pmod {100}$</li> </ol> <p>So, for example, $$ 111^{111} \equiv 11^{111} \equiv (11^{40})^2 11^{31} \equiv 11^{31} \pmod{100} $$</p>
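In code (my addition): Python's three-argument `pow` does modular exponentiation directly, so the brute force the question attempted is instant once every term is taken mod $100$.

```python
# Three-argument pow(base, exp, mod) is fast modular exponentiation
terms = [(1, 1), (111, 111), (11111, 11111),
         (1111111, 1111111), (111111111, 111111111),
         (5, 1), (555, 111), (55555, 11111),
         (5555555, 1111111), (55555555, 111111111)]
total = sum(pow(base, exp, 100) for base, exp in terms) % 100

# The reduction from the answer: 111^111 ≡ 11^111 ≡ 11^31 (mod 100)
check = pow(111, 111, 100) == pow(11, 111, 100) == pow(11, 31, 100)
print(total, check)
```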
lab bhattacharjee
33,337
<p>HINT:</p> <p>$$(1+10n)^{1+10n}=1+\binom{1+10n}1(10n)\pmod{100}\equiv1+10n$$</p> <p>and $$(5+50n)^{1+10n}=5^{1+10n}+\binom{1+10n}1(50n)5^{10n}\pmod{100}$$</p> <p>Now, $$5^{m+2}-5^2=5^2(5^m-1)\equiv0\pmod{100}\implies5^{m+2}\equiv25\pmod{100}$$ for integer $m\ge0$</p> <p>$$\implies5^{1+10n}+\binom{1+10n}1(50n)5^{10n}\equiv25+(1+10n)(50n)25\pmod{100}$$ $$\equiv25+1250n$$ for $n\ge1$</p> <p>For odd $n,$ $$(5+50n)^{1+10n}\equiv25+50\pmod{100}$$</p>
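A spot-check of the two congruences in the hint (my addition):

```python
# (1+10n)^(1+10n) ≡ 1+10n (mod 100) for small n
ok_repunit = all(pow(1 + 10 * n, 1 + 10 * n, 100) == (1 + 10 * n) % 100
                 for n in range(1, 10))

# (5+50n)^(1+10n) ≡ 25 + 50 ≡ 75 (mod 100) for odd n
ok_fives = all(pow(5 + 50 * n, 1 + 10 * n, 100) == 75
               for n in range(1, 10, 2))
print(ok_repunit, ok_fives)
```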
1,067,131
<p>I'm reading an analysis book for fun and I got stuck on a problem.</p> <p>The task is to find the function $f$ if $$f(x-y,x+y) = \frac{x^2 + y^2}{2xy}$$</p> <p>Since I can see the solution $\frac{x^2 + y^2}{y^2 - x^2}$ from the book (it's given in the back), I can reverse-engineer the solution:</p> <p>$$ \frac{(x-y)^2 + (x+y)^2}{(x+y)^2 - (x-y)^2} = \frac{x^2 - 2xy + y^2 + x^2 + 2xy + y^2}{x^2+2xy+y^2-x^2+2xy-y^2} = \frac{2x^2+2y^2}{4xy} = \frac{2(x^2+y^2)}{2\cdot2xy} = \frac{x^2+y^2}{2xy} = f(x-y,x+y)$$</p> <p>But I don't think this problem is meant to be solved by knowing the solution first. So my question is how would you solve this problem if you didn't know the answer? Is there something that's like a procedure you can follow or do you just have to be clever enough to think of the intermediate steps?</p>
GPerez
118,574
<p>If you've studied the cyclotomic polynomial the answer becomes quite simple! Since $\omega_7=e^{2\pi i/7}$ is a root of the seventh cyclotomic polynomial $\Phi_7$, and $\Phi_7$ can be shown to be irreducible, the degree of the extension is $\deg\Phi_7 = \varphi(7) = 6$, because $7$ is prime ($\varphi$ denotes the Euler totient function). In this proof we also use what Arthur has pointed out, namely that we can drop $\omega_7^5 $ (which I still leave to prove).</p>
Jyrki Lahtonen
11,619
<p>Extended hint:</p> <p>The field $\Bbb{Q}(\omega_7+\omega_7^5)$ is contained in the 7th cyclotomic field $\Bbb{Q}(\omega_7)$. That field is an abelian extension of $\Bbb{Q}$, so all intermediate fields are themselves Galois extensions of $\Bbb{Q}$. This follows from Galois correspondence as all subgroups of an abelian group are normal. </p> <p>To make much progress here you should be familiar with the Galois group of $G=Gal(\Bbb{Q}(\omega_7)/\Bbb{Q})$. That group is cyclic of order six, and an automorphism $\sigma\in G$ is fully determined by $\sigma(\omega_7)=\omega_7^k$, where $k$ can be any integer in the range $1,2,3,4,5,6$. Denote that automorphism by $\sigma_k$.</p> <p>The degree of any algebraic extension $K(a)/K$ is the degree of the minimal polynomial of $a$ over $K$. When the extension is separable (which is the case here, because ________ ) the zeros of the minimal polynomial are all simple and exactly the conjugates of $a$. This gives us the idea:</p> <blockquote> <p>Count the number of conjugates of $\omega_7+\omega_7^5$. These are the numbers $\sigma_k(\omega_7+\omega_7^5)$. Write all of them ($k=1,2,\ldots,6$) in terms of low powers of $\omega$, and check how many of them are distinct.</p> </blockquote>
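As a numerical check of the counting exercise in the hint (my addition): compute $\sigma_k(\omega_7+\omega_7^5)=\omega_7^k+\omega_7^{5k}$ for $k=1,\dots,6$ and count how many distinct complex numbers appear.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 7)   # a primitive 7th root of unity, omega_7

# sigma_k sends omega -> omega^k, so it sends omega + omega^5 to omega^k + omega^{5k}
conjugates = [w**k + w**(5 * k) for k in range(1, 7)]

# count distinct values up to floating-point noise
distinct = {(round(z.real, 9), round(z.imag, 9)) for z in conjugates}
print(len(distinct))
```

All six conjugates turn out to be distinct, so the minimal polynomial has degree $6$.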
902,522
<p>How would I simplify a fraction that has a radical in it? For example:</p> <p>$$\frac{\sqrt{2a^7b^2}}{{\sqrt{32b^3}}}$$</p>
Wouter Stekelenburg
27,375
<p>The right way to build such a category is a philosophical question. There are different approaches in the mathematical literature. One thing is clear though: the objects should be propositions, not just theorems.</p> <p>The problem is to define equality of proofs in a sensible way. For example, let $\Pi$ be Pythagoras' theorem. Should each of the over 100 proofs of $\Pi$ found <a href="http://www.cut-the-knot.org/pythagoras/" rel="noreferrer">here</a> be a different morphism $\top\to\Pi$? In that case, it is hard to see how composition of proofs can be defined in such a way that there is a unique "identity proof" for each proposition.</p> <p>One approach is to consider some proofs <em>essentially equal</em> if some superficial transformations turn one proof into the other. This, however, shifts the problem of defining the equality of proofs to the problem of defining the equality of transformations of proofs. So proofs and propositions are actually part of some $\infty$-category. If you like this line of reasoning, take a look at <a href="http://homotopytypetheory.org/" rel="noreferrer">homotopy type theory</a> and its implementation in various <a href="http://en.wikipedia.org/wiki/Proof_assistant" rel="noreferrer">proof assistants</a>.</p> <p>Another approach is to simply consider every proof equal to any other proof of the same proposition, so that the category of propositions and proofs is a poset. For classical first order logic this poset is known as the <a href="http://en.wikipedia.org/wiki/Lindenbaum%E2%80%93Tarski_algebra" rel="noreferrer">Lindenbaum–Tarski algebra</a>.</p> <p>The <a href="http://en.wikipedia.org/wiki/Lambda_calculus" rel="noreferrer">$\lambda$-calculus</a> is a middle way between the infinity categories and the posets. Proofs can be encoded as $\lambda$-terms. A lot of irrelevant differences between proofs are lost in this encoding. There are equivalence relations on $\lambda$-terms based on transformations like $\beta$-reduction. 
The counterparts of $\lambda$-calculi are <a href="http://en.wikipedia.org/wiki/Cartesian_closed_category" rel="noreferrer">Cartesian closed categories</a>.</p>
813,395
<p>I have read that linear independence occurs when:</p> <p>$$\sum_{i=1}^n a_i v_i =0$$</p> <p>Has only $a_i=0$ as a solution, but what if all $v_i$ were $0$ then $a_i$ could vary and still yield $0$. Does that mean that such a vector set is not linearly independent?</p> <p>What if I have:</p> <p>Let $\{c_0,c_1,c_2,\dots,c_n\}$ denote a set of $n+1$ distinct elements in $\mathbb{R}$. Define the set of $n+1$ polynomials.</p> <p>$$f_j(x)=\prod_{k=0,k\ne j}^n \frac{x-c_k}{c_j - c_k} $$</p> <p>Note that $f_j(x) \in P_n(\mathbb{R})$ with the property $$f_j(c_l) = \left\{ \begin{align} 0&amp;&amp; \text{if}&amp;&amp; j\ne l\\ 1&amp;&amp; \text{if}&amp;&amp; j= l \end{align} \right.$$</p> <p>And $\alpha = \{f_0(x),f_1(x),\dots,f_n(x)\}$, then this is or isn't linearly independent based on my $x$ value. Is there something here that forces $x$ to equal one of my $c_j$? For I am told that this $\alpha$ is linearly independent.</p>
Andreas Caranti
58,401
<p>Any set of vectors containing zero is linearly dependent, that is, <em>not</em> linearly independent.</p> <p>This is simply because, as you have said, if $v_{1} = 0$, say, then $$ 1 \cdot v_{1} + 0 \cdot v_{2} + \dots + 0 \cdot v_{n} = 0, $$ and not all coefficients are zero.</p>
3,011,862
<p>Test the convergence <span class="math-container">$$\int_0^1 \frac{x^n}{1+x}dx$$</span></p> <p>I have used comparison test for improper integrals..by comparing with <span class="math-container">$1/(1+x)$</span>... so I found it convergent .. But the solution set says that it is convergent if <span class="math-container">$n&gt; -1$</span>.</p>
fleablood
280,126
<p>Pick the last digit <em>first</em>. You have <span class="math-container">$3$</span> choices (<span class="math-container">$2,6$</span> or <span class="math-container">$8$</span>).</p> <p>Then pick the first digit. As you have already picked a digit you have only <span class="math-container">$4$</span> choices remaining.</p> <p>Then pick the second digit. As you have already picked two you have only three choices remaining.</p> <p>So the number of ways to do this is <span class="math-container">$3*4*3 = 36$</span></p> <p>If you <em>don't</em> pick the last digit first you will have <span class="math-container">$5$</span> choices for the first choice and <span class="math-container">$4$</span> for the second but when coming to choosing the last digit you do not know if you have <span class="math-container">$3$</span> choices, <span class="math-container">$2$</span> choices or only <span class="math-container">$1$</span>. So you can't do it that way without tricky subcases (which can be done but is trickier). And it doesn't matter which order you choose the digits. So it's a good idea to pick the most restrictive digit first. 
</p> <p>Maybe the following make it clearer.</p> <p><span class="math-container">$\overbrace{\begin{cases}\overbrace{\begin{cases} \\ \overbrace{\begin{cases} 152\\162\\182 \end{cases}}^{\text{first digit is 1 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 512\\562\\582 \end{cases}}^{\text{first digit is 5 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 612\\652\\682 \end{cases}}^{\text{first digit is 6 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 812\\852\\862 \end{cases}}^{\text{first digit is 8 (there are 3 choices for the second digit)}}\\ \end{cases}}^{\text{last digit is 2 (there are 4 choices for the first digit)}}\\ \overbrace{\begin{cases} \\ \overbrace{\begin{cases} 156\\126\\186 \end{cases}}^{\text{first digit is 1 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 216\\256\\286 \end{cases}}^{\text{first digit is 2 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 516\\526\\586 \end{cases}}^{\text{first digit is 5 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 816\\826\\856 \end{cases}}^{\text{first digit is 8 (there are 3 choices for the second digit)}}\\ \end{cases}}^{\text{last digit is 6 (there are 4 choices for the first digit)}}\\ \overbrace{\begin{cases} \\ \overbrace{\begin{cases} 128\\158\\168 \end{cases}}^{\text{first digit is 1 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 218\\258\\268 \end{cases}}^{\text{first digit is 2 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 518\\528\\568 \end{cases}}^{\text{first digit is 5 (there are 3 choices for the second digit)}}\\ \overbrace{\begin{cases} 618\\628\\658 \end{cases}}^{\text{first digit is 6 (there are 3 choices for the second digit)}}\\ \end{cases}}^{\text{last digit is 8 (there are 4 choices for the first digit)}}\end{cases}}^{\text{There are 3 choices for the last digit; either 2,6, or 8}}$</span></p>
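A brute-force verification of the count (my addition): enumerate all three-digit arrangements of distinct digits from $\{1,2,5,6,8\}$, as in the case tree above, and keep those ending in an even digit.

```python
from itertools import permutations

digits = [1, 2, 5, 6, 8]
# ordered triples of distinct digits whose last digit is even
evens = [p for p in permutations(digits, 3) if p[2] % 2 == 0]
print(len(evens))
```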
3,546,615
<p>Why do we take the thickness to be the differential of the distance of the elemental mass when calculating the volume, but the differential arc length when calculating the area of the sphere, when integrating in terms of the angle?</p> <p><a href="https://i.imgur.com/Mw8oW85m.jpg" rel="nofollow noreferrer"><img src="https://i.imgur.com/Mw8oW85m.jpg" alt="sphere"></a> <a href="https://i.stack.imgur.com/VuPgUm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VuPgUm.png" alt="elemental part"></a></p> <p>Before going into depth, I refer to <a href="https://math.stackexchange.com/questions/3546395/why-doesnt-this-method-work-for-getting-volume-of-a-sphere-and-how-to-find-vol">this thread</a> first.</p> <p>So what I learnt is that when getting a small volume we take <span class="math-container">$$dl = d(R\sin θ) = R\cos θ\cdot dθ &lt;&lt;$$</span> <span class="math-container">$$r = R\cosθ$$</span> <span class="math-container">$$dV = πr^2\cdot dl$$</span></p> <p>While when calculating area we take <span class="math-container">$dl$</span>, <span class="math-container">$$dθ = \frac{dl}{R}$$</span> <span class="math-container">$$dl = R\cdot dθ &lt;&lt;$$</span> <span class="math-container">$$dA = 2πr\cdot dl$$</span></p> <p>But why a different elemental thickness (<span class="math-container">$dl$</span>) for those calculations?</p> <blockquote> <p>If I wasn't able to make you understand what I said, then see the second figure: the <span class="math-container">$h$</span> is taken as <span class="math-container">$dl$</span> in the calculation of volume, while the <em>curved surface</em> is used as <span class="math-container">$dl$</span> for the calculation of area</p> </blockquote> <p><strong>My Question is: <em>WHY</em></strong></p>
Emilio Novati
187,568
<p>This is because <span class="math-container">$\theta \to 0$</span>, so <span class="math-container">$$ \left.\frac{d \sin \theta}{d\theta}\right|_{\theta = 0}=\lim_{\theta \to 0}\frac{\sin \theta}{\theta}=1 $$</span> In other words, if <span class="math-container">$\theta$</span> is small, its difference from <span class="math-container">$\sin \theta$</span> is much smaller.</p>
1,483,489
<p>What I have been trying is:</p> <p>Suppose that $f(x)$ has at least one zero $\alpha$ such that $f(x) = (x - \alpha)^s q(x)$, $s &gt; 1$ in some extension. Then I guess that $(x-\alpha)^{s-1} \mid f(x)$. So, $f(x)$ is not irreducible, where $f(x) = (x-\alpha)^{s-1}h(x)$. But it seems wrong, since I never used the hypothesis $\operatorname{char} F = 0$.</p> <p>What am I missing? Could someone help me?</p> <p>Thanks a lot.</p>
André Nicolas
6,312
<p>Outline: We can assume that $f$ has degree $n\gt 1$. Then $f'(x)$ has degree $n-1$. (This is where we use characteristic $0$. In characteristic $p$, this part can fail. For example, the derivative of $x^p+1$ is the $0$ polynomial.) </p> <p>Since $f$ is irreducible, $f(x)$ and $f'(x)$ are relatively prime over $F$. By the Bezout Identity, they are relatively prime over any extension field $K$ of $F$. But then $f(x)$ cannot have a root of multiplicity $\gt 1$ over $K$, since if $(x-a)^2$ divides $f(x)$ over $K$, then $f(x)$ and $f'(x)$ are not relatively prime over $K$: each is divisible by $x-a$.</p>
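The characteristic-$p$ failure in the parenthetical remark is easy to check concretely (my sketch; coefficients are listed from the constant term up):

```python
def poly_derivative_mod(coeffs, p):
    # coeffs[i] is the coefficient of x^i; reduce mod p (p = 0 means characteristic 0)
    deriv = [i * c for i, c in enumerate(coeffs)][1:]
    return [c % p for c in deriv] if p else deriv

f = [1] + [0] * 4 + [1]            # x^5 + 1

print(poly_derivative_mod(f, 0))   # characteristic 0: degree 4, nonzero
print(poly_derivative_mod(f, 5))   # characteristic 5: identically zero
```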
1,012,158
<p>$$ y\in \mathbb R$$ Prove: <br> if for every positive number $b$: $$ \left\lvert y \right\rvert \leq b $$ then $y=0$</p> <p>I tried separating into cases where</p> <p>$$ -b\leq y\leq 0 $$ and $$ 0\leq y\leq b $$</p> <p>But I can't see where it helps me, any ideas? thanks</p>
Empiricist
189,188
<p>Assume on the contrary that $y \neq 0$.</p> <p>Then $\frac{|y|}{2} &gt; 0$, so we may take $b = \frac{|y|}{2}$ in the hypothesis: $|y| \leq \frac{|y|}{2}$, which gives $|y| \leq 0$. Contradiction.</p>
3,805,286
<p>This is a question on the convergence of a sequence of real, convex, analytic functions (it does not get better than that!):</p> <p>Let <span class="math-container">$(f_n)_{n\in \mathbb N}$</span> be a sequence of convex analytic functions on <span class="math-container">$\mathbb R$</span>.</p> <p>Suppose that <span class="math-container">$f_n(x) \to f(x)$</span> as <span class="math-container">$n \to \infty$</span> for all <span class="math-container">$x \in \mathbb R$</span> (or in <span class="math-container">$\mathbb R^+$</span>).</p> <p>Is <span class="math-container">$f(x)$</span> analytic?</p>
Greg Martin
16,078
<p>No—not even necessarily differentiable! The function <span class="math-container">$f_n(x) = \frac1n\log(1+e^{nx})$</span> is convex and analytic on <span class="math-container">$\Bbb R$</span>, but <span class="math-container">$$ \lim_{n\to\infty} \frac1n\log(1+e^{nx}) = \begin{cases} 0, &amp;\text{if } x\le 0, \\ x, &amp;\text{if } x\ge0. \end{cases} $$</span></p>
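One can watch Greg Martin's example converge numerically. The sketch below is an illustration I added, not part of the answer; the rewrite $\frac1n\log(1+e^{nx}) = \max(x,0) + \frac1n\log(1+e^{-n|x|})$ is algebraically equivalent and avoids floating-point overflow for large $n$.

```python
import math

def f_n(x, n):
    # (1/n) * log(1 + exp(n*x)), rewritten stably as
    # max(x, 0) + (1/n) * log1p(exp(-n*|x|)) to avoid overflow
    return max(x, 0.0) + math.log1p(math.exp(-n * abs(x))) / n

for x in (-1.0, -0.1, 0.0, 0.5, 1.0):
    print(x, [round(f_n(x, n), 6) for n in (1, 10, 100, 1000)])
# each row converges to max(x, 0) as n grows; the limit has a corner at x = 0
```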
3,322,492
<p><strong>Prove:</strong> <span class="math-container">$A \cap (B - C) = (A \cap B) − (A \cap C)$</span></p> <p>I can understand this using Venn Diagrams, however I am struggling to translate this into a formal proof. </p>
Henno Brandsma
4,280
<p><span class="math-container">$$A \cap (B-C) = A \cap (B \cap C^\complement)$$</span> while</p> <p><span class="math-container">$$(A\cap B) - (A \cap C) = (A \cap B) \cap (A \cap C)^\complement = \\ (A \cap B) \cap (A^\complement \cup C^\complement) \quad \text{(de Morgan)} =\\ (A \cap B \cap A^\complement) \cup (A \cap B \cap C^\complement) = A \cap B \cap C^\complement$$</span></p> <p>so we have equality (the first component of the final union is empty).</p>
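For a sanity check beyond the formal proof, the identity can be verified exhaustively over a small universe. This brute-force sketch is my addition; the 3-element universe is an arbitrary choice (by the structure of the identity, 3 elements already cover all membership patterns that matter).

```python
from itertools import combinations

U = [0, 1, 2]  # a small arbitrary universe
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

# check A ∩ (B − C) == (A ∩ B) − (A ∩ C) for all 8^3 choices of A, B, C
for A in subsets:
    for B in subsets:
        for C in subsets:
            assert A & (B - C) == (A & B) - (A & C)
print("identity holds for all", len(subsets) ** 3, "triples")  # 512 triples
```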
4,463
<p>It seems that most authors use the phrase "elementary number theory" to mean "number theory that doesn't use complex variable techniques in proofs." </p> <p>I have two closely related questions.</p> <ol> <li>Is my understanding of the usage of "elementary" correct?</li> <li>It appears that advanced techniques from other areas (e.g. algebra) are allowed, just not complex variables. Are there historical reasons for why complex analysis singled out as a tool to avoid? </li> </ol> <p>NB: I'm asking about how "elementary" usually <strong>is</strong> defined and why, not how it <strong>should be</strong> defined.</p>
Michael Hoffman
429
<p>Whenever I've heard the term "elementary number theory", the speaker seems to mean number theory done without the machinery of analytic number theory.</p> <p>I would imagine that the reason for not using complex numbers is at least partially related to the idea that they're going over old results that were essentially developed before the development of complex analysis.</p> <p>Hope that helps.</p>
4,463
<p>It seems that most authors use the phrase "elementary number theory" to mean "number theory that doesn't use complex variable techniques in proofs." </p> <p>I have two closely related questions.</p> <ol> <li>Is my understanding of the usage of "elementary" correct?</li> <li>It appears that advanced techniques from other areas (e.g. algebra) are allowed, just not complex variables. Are there historical reasons for why complex analysis singled out as a tool to avoid? </li> </ol> <p>NB: I'm asking about how "elementary" usually <strong>is</strong> defined and why, not how it <strong>should be</strong> defined.</p>
Community
-1
<p>Your usage of "elementary" is correct; your definition is the one that most number theorists would use. You don't have to take my word for it however; just consider the first sentence of <a href="http://www.jstor.org/pss/1969455" rel="noreferrer">Selberg's Elementary Proof of the Prime Number Theorem</a>:</p> <p><em>In this paper will be given a new proof of the prime-number theorem, which is elementary in the sense that it uses practically no analysis, except the simplest properties of the logarithm.</em></p> <p>Ironically, of the many known proofs of the prime-number theorem, this <em>elementary</em> proof ranks as one of the most complicated.</p>
4,463
<p>It seems that most authors use the phrase "elementary number theory" to mean "number theory that doesn't use complex variable techniques in proofs." </p> <p>I have two closely related questions.</p> <ol> <li>Is my understanding of the usage of "elementary" correct?</li> <li>It appears that advanced techniques from other areas (e.g. algebra) are allowed, just not complex variables. Are there historical reasons for why complex analysis singled out as a tool to avoid? </li> </ol> <p>NB: I'm asking about how "elementary" usually <strong>is</strong> defined and why, not how it <strong>should be</strong> defined.</p>
Alexey Ustinov
5,712
<p>Probably there is no correct boundary between elementary and non-elementary number theory. There are two possibilities: either we can apply limits or not. It is like the axiom of choice in set theory.</p> <p>1) If we have no <span class="math-container">$\lim$</span>, then we have no <span class="math-container">$\pi$</span>, no <span class="math-container">$e$</span>, no little <span class="math-container">$o$</span>, ... Almost nothing.</p> <p>2) If we have a <span class="math-container">$\lim$</span>, then we can get any "non-elementary" construction as a limiting case of some "elementary" one. Almost all.</p> <p>This answer is motivated by the "primarily opinion-based" question <a href="https://mathoverflow.net/questions/150895/is-discrete-fourier-series-an-elementary-object">Is Discrete Fourier Series an elementary object?</a></p>
459,579
<blockquote> <p>Find the value of $3^9\cdot 3^3\cdot 3\cdot 3^{1/3}\cdot\cdots$</p> </blockquote> <p>Doesn't this thing approach $0$ at the end? Why does it approach $1$?</p>
Harish Kayarohanam
30,423
<p>Write the product as $ 3^{12} \times 3^{\text{sum of geometric series}} $, where the geometric series is</p> <p>$ 1 + 1/3 + 1/9 + \cdots = \dfrac{1}{1-1/3} = \dfrac{3}{2}, $</p> <p>so the value is</p> <p>$ 3^{12 + \frac{3}{2}} = 3^{27/2}. $</p>
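A numerical sanity check of this answer (my addition; the cutoff of 60 factors is an arbitrary choice, far more than double precision needs): multiplying the factors $3^{9}, 3^{3}, 3^{1}, 3^{1/3}, \dots$ directly converges to $3^{27/2}$.

```python
product = 1.0
exponent = 9.0
for _ in range(60):  # exponents 9, 3, 1, 1/3, ... shrink geometrically
    product *= 3.0 ** exponent
    exponent /= 3.0

print(product)       # the partial product...
print(3.0 ** 13.5)   # ...agrees with 3^(27/2)
```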
192,394
<p>I'm re-reading some material from Apostol's Calculus. He asks to prove that, if $f$ is such that, for any $x,y\in[a,b]$ we have</p> <p>$$|f(x)-f(y)|\leq|x-y|$$</p> <p>then:</p> <p>$(i)$ $f$ is continuous in $[a,b]$</p> <p>$(ii)$ For any $c$ in the interval,</p> <p>$$\left|\int_a^b f(x)dx-(b-a)f(c)\right|\leq\frac{(b-a)^2}{2}$$</p> <p>The proof for the first part is easy, and I omit it. I'm interested in the second one.</p> <p>We can write that as</p> <p>$$\left| {\int_a^b f (x)dx - \int_a^b f (c)dx} \right| \leqslant \frac{{{{(b - a)}^2}}}{2}$$</p> <p>Or $$\left| {\int_a^b {\left( {f(x) - f(c)} \right)dx} } \right| \leqslant \frac{{{{(b - a)}^2}}}{2}$$</p> <p>Now, it is not hard to show that</p> <p>$$\left| {\int_a^b {\left( {f(x) - f(c)} \right)dx} } \right| \leqslant \int_a^b {\left| {f(x) - f(c)} \right|dx} $$</p> <p>By hypothesis, we have</p> <p>$$\left| {f(x) - f(c)} \right| \leqslant \left| {x - c} \right|$$</p> <p>so that</p> <p>$$\left| {\int_a^b {\left( {f(x) - f(c)} \right)dx} } \right| \leqslant \int_a^b {\left| {f(x) - f(c)} \right|dx} \leqslant \int\limits_a^b {\left| {x - c} \right|dx} $$</p> <p>The last term integrates as follows:</p> <p>$$\int\limits_a^b {\left| {x - c} \right|dx} = - \int\limits_a^c {\left( {x - c} \right)dx} + \int\limits_c^b {\left( {x - c} \right)dx} = \frac{{{{\left( {b - c} \right)}^2} + {{\left( {a - c} \right)}^2}}}{2}$$</p> <p>How can I reconcile that with $$\frac{{{{\left( {b - a} \right)}^2}}}{2}?$$</p> <p>I'd like to know what happens in the general case</p> <p>$$|f(x)-f(y)|\leq \lambda |x-y|$$ too.</p>
Seirios
36,434
<p>Another way is to write $$(b-a)^2= ((b-c)-(a-c))^2= (b-c)^2+ (a-c)^2-2(b-c)(a-c) \geq (b-c)^2+(a-c)^2$$ because $(b-c)(a-c) \leq 0$.</p> <p>For your second question, consider $\displaystyle \frac{1}{\lambda} f$ and apply your first result.</p>
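The key inequality $(b-c)^2+(a-c)^2 \le (b-a)^2$ for $c \in [a,b]$ can also be spot-checked numerically. This sketch is my addition; the sampling range and count are arbitrary choices, and the fixed seed just makes the run reproducible.

```python
import random

random.seed(0)
for _ in range(10_000):
    a, b = sorted(random.uniform(-10.0, 10.0) for _ in range(2))
    c = random.uniform(a, b)
    # (b-c)(a-c) <= 0 when a <= c <= b, which is what drives the inequality
    assert (b - c) ** 2 + (a - c) ** 2 <= (b - a) ** 2 + 1e-9
print("checked 10000 random triples a <= c <= b")
```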
261,031
<p>I hope some of you can help me solve my problem. I need to work on data in the following way, where the lengths of the lists and sublists are equal. As an example I want to share the data pattern with you:</p> <pre><code>list1={a,b,c}; list2={{d,e,f},{g,h,i},......} (in reality the number of sublists in list2 is about 30) data=list2[[1]] </code></pre> <p>The goal is now to combine these lists in the following form: <code>{ {{a,d},{b,e},{c,f}}, {{a,g},...} ...}</code>.</p> <hr /> <p>The next step is to plot the data or to create a fit formula. The values of list1 should always be plotted as the x-data. I created some of the most interesting datasets as follows:</p> <pre><code>plottedList=Table[{list1[[k]], data[[k]]}, {k, 1, Length[data]}] ListPlot[plottedList] </code></pre> <p>My problem is that I now need to plot all of the data pairs and combine the data into lists from which I can find a linear or nonlinear fit.</p> <hr /> <p>I hope you can give me some advice. Best, Chris!</p>
user1066
106
<pre><code>Inner[{#2,#1}&amp;,list2,list1,List] (* {{{a, d}, {b, e}, {c, f}}, {{a, g}, {b, h}, {c, i}}} *) </code></pre> <p><strong>A Slot-free version</strong></p> <p><a href="https://mathematica.stackexchange.com/users/50/j-m-cant-deal-with-it">J. M. can't deal with it </a> (in a comment) gives the following neat modification:</p> <pre><code>Inner[ReverseApplied[List], list2, list1, List] </code></pre> <p><strong>Lists</strong></p> <pre><code>list1 = {a, b, c}; list2 = {{d, e, f}, {g, h, i}}; </code></pre>
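For readers more at home outside Mathematica, here is a Python analogue of what `Inner[{#2,#1}&, list2, list1, List]` does in this case (my addition, not part of the answer): pair the $k$-th element of `list1` with the $k$-th element of each sublist of `list2`.

```python
list1 = ["a", "b", "c"]
list2 = [["d", "e", "f"], ["g", "h", "i"]]

# pair the k-th element of list1 with the k-th element of each sublist
paired = [[[x, y] for x, y in zip(list1, sub)] for sub in list2]
print(paired)
# [[['a', 'd'], ['b', 'e'], ['c', 'f']], [['a', 'g'], ['b', 'h'], ['c', 'i']]]
```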
2,961,023
<p>Is it allowed to solve the inequality <span class="math-container">$x|x-1|&gt;-3$</span> by dividing both sides by <span class="math-container">$x$</span>? What if <span class="math-container">$x$</span> is negative?</p> <p>My textbook provides the following solution:</p> <blockquote> <p>Divide both sides by <span class="math-container">$x: $</span> <span class="math-container">$\frac { x | x - 1 | } { x } &gt; \frac { - 3 } { x } ; \quad x \neq 0$</span></p> <p>Simplify: <span class="math-container">$| x - 1 | &gt; - \frac { 3 } { x } ; \quad x \neq 0$</span></p> </blockquote> <p>Edit: provided the textbook's solution.</p>
Will Jagy
10,400
<p><img src="https://i.stack.imgur.com/bKIyr.jpg" alt=""></p>
4,262,888
<p>My task is to prove that if an atomic measure space is <span class="math-container">$\sigma$</span>-finite, then the set of atoms must be countable.</p> <p>This is my given definition of an atomic measure space:</p> <blockquote> <p>Assume <span class="math-container">$(X,\mathcal{M},\mu)$</span> is a measure space with all single points being measurable. An <strong>atom</strong> is a point <span class="math-container">$x$</span> with <span class="math-container">$\mu(\{x\}) &gt; 0$</span>. Letting <span class="math-container">$\mathcal{A}$</span> be the set of atoms, <span class="math-container">$(X,\mathcal{M},\mu)$</span> is called <strong>atomic</strong> if <span class="math-container">$\mathcal{A}\in\mathcal{M}$</span> and <span class="math-container">$\mu(\mathcal{A^c}) = 0$</span>.</p> </blockquote> <hr /> <p>I didn't know how to prove this at first, so I looked it up on stack exchange and found <a href="https://math.stackexchange.com/a/850597/933963">this answer</a>: (I do not have enough reputation to comment on the original post)</p> <blockquote> <p>Here's how to prove your claim, with the appropriate assumption. Let <span class="math-container">$S\subset X$</span> be the set of atoms for some measure <span class="math-container">$\mu$</span> on <span class="math-container">$X$</span>. Let <span class="math-container">$\{U_i\}$</span> be a countable measurable partition of <span class="math-container">$X$</span>. Then if <span class="math-container">$S$</span> is uncountable, some <span class="math-container">$U_i$</span> contains an uncountable subset <span class="math-container">$S'$</span> of <span class="math-container">$S$</span>, and <span class="math-container">$\mu(U_i)\geq \sum_{x\in S'}\mu(x)=\infty$</span> since any uncountable sum of positive numbers diverges. 
Thus <span class="math-container">$\mu$</span> is not <span class="math-container">$\sigma$</span>-finite.</p> </blockquote> <p>My question is why do we have that <span class="math-container">$\mu(U_i) \geq \sum_{x\in S'} \mu(x)$</span> ? I am assuming that this inequality comes from subadditivity of <span class="math-container">$\mu$</span> but as I have understood it subadditivity is defined for countable unions, not for uncountable unions so I am confused as to how we arrive at an uncountable sum in this step.</p>
Lazy
958,820
<p>That does appear to be a bit sloppy. But you can mend it by defining the uncountable sum as the supremum of sums over countable selections. You can easily show: if you have an uncountable family of strictly positive values, then for any countable selection there is one with a strictly larger sum, so this supremum must be <span class="math-container">$\infty$</span> (otherwise, take a sequence of countable selections whose sums converge to the supremum; the union of all these selections is again a countable selection, and its sum is at least that limit).</p>
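To spell out why an uncountable family of strictly positive values has divergent sum, here is the standard pigeonhole sketch (my addition, phrased in the notation of the quoted answer):

```latex
Write $S' = \bigcup_{n \ge 1} S'_n$, where
$S'_n = \{\, x \in S' : \mu(\{x\}) > 1/n \,\}$.
If $S'$ is uncountable, then some $S'_n$ must be infinite
(a countable union of finite sets is countable).
Choosing distinct $x_1, \dots, x_k \in S'_n$ gives
\[
  \sum_{i=1}^{k} \mu(\{x_i\}) > \frac{k}{n}
  \xrightarrow[k \to \infty]{} \infty,
\]
so the supremum over countable (even finite) selections is $\infty$.
```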