4,262,888
<p>My task is to prove that if an atomic measure space is <span class="math-container">$\sigma$</span>-finite, then the set of atoms must be countable.</p> <p>This is my given definition of an atomic measure space:</p> <blockquote> <p>Assume <span class="math-container">$(X,\mathcal{M},\mu)$</span> is a measure space with all single points being measurable. An <strong>atom</strong> is a point <span class="math-container">$x$</span> with <span class="math-container">$\mu(\{x\}) &gt; 0$</span>. Letting <span class="math-container">$\mathcal{A}$</span> be the set of atoms, <span class="math-container">$(X,\mathcal{M},\mu)$</span> is called <strong>atomic</strong> if <span class="math-container">$\mathcal{A}\in\mathcal{M}$</span> and <span class="math-container">$\mu(\mathcal{A^c}) = 0$</span>.</p> </blockquote> <hr /> <p>I didn't know how to prove this at first, so I looked it up on stack exchange and found <a href="https://math.stackexchange.com/a/850597/933963">this answer</a>: (I do not have enough reputation to comment on the original post)</p> <blockquote> <p>Here's how to prove your claim, with the appropriate assumption. Let <span class="math-container">$S\subset X$</span> be the set of atoms for some measure <span class="math-container">$\mu$</span> on <span class="math-container">$X$</span>. Let <span class="math-container">$\{U_i\}$</span> be a countable measurable partition of <span class="math-container">$X$</span>. Then if <span class="math-container">$S$</span> is uncountable, some <span class="math-container">$U_i$</span> contains an uncountable subset <span class="math-container">$S'$</span> of <span class="math-container">$S$</span>, and <span class="math-container">$\mu(U_i)\geq \sum_{x\in S'}\mu(x)=\infty$</span> since any uncountable sum of positive numbers diverges. 
Thus <span class="math-container">$\mu$</span> is not <span class="math-container">$\sigma$</span>-finite.</p> </blockquote> <p>My question is: why do we have that <span class="math-container">$\mu(U_i) \geq \sum_{x\in S'} \mu(x)$</span>? I assume this inequality comes from subadditivity of <span class="math-container">$\mu$</span>, but as I understand it, subadditivity is defined for countable unions, not for uncountable ones, so I am confused as to how we arrive at an uncountable sum in this step.</p>
Michael Hardy
11,667
<p>Here I will take &quot;countable&quot; to mean finite or countably infinite.</p> <p>For <span class="math-container">$n\in\{1,2,3,\ldots\},$</span> if the set <span class="math-container">$A_n = \{ x : \mu(\{x\}) \ge 1/n \}$</span> were not countable then the space would not be <span class="math-container">$\sigma$</span>-finite.</p> <p>Since <span class="math-container">$(0,+\infty)= \bigcup_{n=1}^\infty \left(\frac 1 n, +\infty\right),$</span> the set <span class="math-container">$\bigcup_{n=1}^\infty A_n$</span> is the set of all atoms. This is a countable union of countable sets.</p>
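The uncountable sum in the quoted argument never actually needs countable subadditivity. A sketch of the standard definition and the finite-additivity step it licenses (my own expansion, not part of the original answer):

```latex
% An arbitrary sum of nonnegative terms is defined as a supremum of finite partial sums:
\sum_{x \in S'} \mu(\{x\}) \;:=\; \sup\Bigl\{\, \sum_{x \in F} \mu(\{x\}) \;:\; F \subset S' \text{ finite} \Bigr\}.
% For every finite F \subset S' \subset U_i, finite additivity and monotonicity give
\mu(U_i) \;\ge\; \mu(F) \;=\; \sum_{x \in F} \mu(\{x\}),
% so taking the supremum over all finite F yields
\mu(U_i) \;\ge\; \sum_{x \in S'} \mu(\{x\}),
% with no appeal to uncountable (or even countable) subadditivity.
```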
2,202,724
<p><strong>Method 1:</strong></p> <p><img src="https://i.stack.imgur.com/vRVgX.png" alt="Method 1 image hyperlink"></p> <p><strong>Method 2:</strong></p> <p><img src="https://i.stack.imgur.com/pwww8.png" alt="Method 2 image hyperlink"></p> <p>In these two images, you will see that I have integrated $\sin^3 x$ using different techniques. As you can see, I get different answers. I asked my teacher why this is, and he said it is because the constants '$C$' are different for each one. </p> <p>Can someone please explain to me what that means? Also, why does it vanish when we add limits? </p> <p>I know this is a relatively easy question for this site, but please be aware that I am only 16. So, could you make your answers simple enough for me to understand?</p> <p>Secondly, you'll have seen that I showed my working out in the images. I did this using Word - WHICH TOOK A LIFETIME! Do you have any suggestions of apps, websites or literally anything that could speed up digitalising my working out for maths?</p> <p>Thanks, IB</p>
David K
139,123
<p>Your teacher was referring to a true fact that is worthwhile for you to know, although as it turns out, it does <em>not</em> apply to your two calculations.</p> <p>To understand what your teacher is talking about, we need to remember that $\int f(x)\,dx$ does not describe a single function, but rather a family of functions containing every function whose derivative is $f.$ Each member of that family of functions is an <em>antiderivative</em> or <em>primitive</em> of $f.$</p> <p>Since the notation for a family of functions like this is a bit complicated, we usually indicate the solution of an indefinite integral by writing down just one of its antiderivatives, that is, one representative from the family of functions that solves the integral. The $+C$ term is an acknowledgement that the choice of which function to write is arbitrary; it says that in order to say <em>which</em> of the antiderivatives of $f$ we have written, we just have to choose a value for the constant $C.$ A different choice will give us a different antiderivative, but it belongs to the same family of functions.</p> <p>It often happens that when you use two different methods to integrate a function, you end up with answers that look different, in much the same way that your two answers look different. In those cases, your teacher is correct: if you remove the $+C$ from each answer, the functions that remain differ by a constant amount, and that is the difference your teacher referred to. An example of this is in <a href="https://math.stackexchange.com/a/1659690/139123">this answer to another integration question</a></p> <p>In a definite integral of $f,$ again we can use any antiderivative of $f$ in the solution, but we must use the <em>same</em> antiderivative at both ends of the interval of integration. 
So whatever the amount is by which the antiderivative you chose is greater than the antiderivative someone else chose, the difference cancels out when you subtract the value at one end from the value at the other.</p> <p>In your particular integral, however, in your first attempt at the solution you simplified $[1 - (2\cos^2 x - 1)]$ to $-2\cos^2 x.$ That is incorrect. In fact, $$[1 - (2\cos^2 x - 1)] = 2 - 2\cos^2 x.$$ The missing term $2$ is why you are missing the term $-\cos x$ at the end. That is, your first result is simply <em>wrong,</em> not just using a "different constant $C$" than your second result.</p>
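The general phenomenon this answer describes is easy to see numerically. A sketch in Python (my own example, not the integrals from the images): integrating $2\sin x\cos x$ with the substitution $u=\sin x$ versus $u=\cos x$ gives antiderivatives that differ by the constant $1$:

```python
import math

def F1(x):          # substitute u = sin(x):  integral = sin^2(x)
    return math.sin(x) ** 2

def F2(x):          # substitute u = cos(x):  integral = -cos^2(x)
    return -math.cos(x) ** 2

# F1 - F2 = sin^2 + cos^2 = 1 at every x: same family, different "+C".
diffs = [F1(x) - F2(x) for x in (0.0, 0.5, 1.3, 2.7)]
print(diffs)

# In a definite integral the constant cancels:
a, b = 0.0, 1.0
print(F1(b) - F1(a), F2(b) - F2(a))  # numerically equal
```

This is exactly the "different constants $C$" situation; the point of the answer is that the two $\sin^3 x$ results in the images do *not* differ by a constant, so the first one contains an algebra error rather than a different $C$.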
2,202,724
user428838
428,838
<p>Integration is just the opposite of differentiation, so whenever we integrate without limits we add a constant C, which would vanish on differentiating. You can always write a function f(x) as (f(x) + 0), and on integrating it, the integral of 0 is a constant (the derivative of a constant is zero), so we always write a constant beside the main answer. You can reconcile both of your answers by trigonometry, and you will observe that more constants come out and both your answers are the same. On putting in limits we actually compute (g(final) + C) - (g(initial) + C), and the constants cancel out.</p>
4,422,824
<p><strong>Edit: This question involves derivatives, please read my prior work!</strong></p> <p>This question has me stumped.</p> <blockquote> <p>A car company wants to ensure its newest model can stop in less than 450 ft when traveling at 60 mph. If we assume constant deceleration, find the value of deceleration that accomplishes this.</p> </blockquote> <p>First, from the instructions, I believe...</p> <p><span class="math-container">$f'(x)=(5280/60)-ax=88-ax$</span></p> <p><span class="math-container">$f''(x)=-a$</span></p> <p>I also believe <span class="math-container">$f(x)={\int}f'(x)dx=88x-a{\int}x=88x-a\frac{x^2}{2}$</span>, because <code>a</code> is known to be constant.</p> <p>Where I'm lost is what comes next. I can compute <code>a</code> and <code>x</code> in terms of each other at <code>f(x)=450</code>, but this doesn't seem to get me closer to the answer. Neither does the fact that <span class="math-container">$f^{-1}(450)=x$</span>. What am I missing here? Thank you!</p>
Doug M
317,176
<p><span class="math-container">$a = \frac {dv}{dt}$</span> and <span class="math-container">$v = \frac {dx}{dt}$</span></p> <p>By the chain rule <span class="math-container">$a = \frac {dv}{dx}\frac {dx}{dt} = v\frac {dv}{dx}$</span></p> <p><span class="math-container">$\int a\ dx = \int v\ dv\\ ax = \frac 12 v^2\\ a = \frac {v^2}{2x}$</span></p> <p>But this is the minimum value of <span class="math-container">$a.$</span></p> <p><span class="math-container">$a \ge \frac {88^2}{900}$</span></p> <p>Alternative....</p> <p><span class="math-container">$a = \frac {dv}{dt}\\ v = \frac {dx}{dt}$</span></p> <p><span class="math-container">$v(t) = v_0 - at = 88 - at\\ x(t) = 88 t - \frac 12 a t^2$</span></p> <p>The vehicle comes to a stop at time <span class="math-container">$t = t^*$</span><br /> <span class="math-container">$v(t^*) = 0\\ t^* = \frac {88}{a}\\ x(t^*) = 88\cdot \frac {88}{a} - \frac 12 a \frac {(88)^2}{a^2} = \frac {(88)^2}{2a} &lt; 450\\ a &gt; \frac {88^2}{900}$</span></p>
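The final inequality can be checked with a few lines. A sketch in Python (variable names are mine; units follow the answer, feet and seconds):

```python
v0 = 60 * 5280 / 3600       # 60 mph in ft/s; equals 88.0
d = 450.0                   # required stopping distance in ft

# From v^2 = v0^2 - 2*a*d with v = 0 at the stopping point:
a_min = v0 ** 2 / (2 * d)   # = 88^2 / 900 ft/s^2
print(a_min)

# Any strictly larger deceleration stops the car within 450 ft:
a = a_min * 1.01
stop_distance = v0 ** 2 / (2 * a)
print(stop_distance < d)
```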
180,296
<p>I need an algorithm to decide quickly, in the worst case, whether a 20-digit integer is prime or composite.</p> <p>I do not need the factors.</p> <p>Is the fastest way still a prime factorization algorithm? Or is there a faster way given the above relaxation?</p> <p>In any case, which algorithm gives the best worst-case performance for a 20-digit prime?</p> <p><strong>Update:</strong></p> <p>Here is the simple method I started with:</p> <pre><code>typedef long long int64;

int64 x = 981168724994134051LL; // prime
int64 sq = int64(ceil(sqrt((double)x)));
for (int64 j = 2; j &lt;= sq; j++) {
    if (x % j == 0) { cout &lt;&lt; "fail" &lt;&lt; endl; break; }
}
</code></pre> <p>It takes 9 seconds on my 3.8 GHz i7 3930K. I need to get it down by a factor of about 1000. Going to try a low-end "primorial" sieve and see what that does.</p> <p><strong>Update 2:</strong></p> <p>I created a prime sieve using $2\cdot3\cdot5\cdot7\cdot11\cdot13\cdot17 = 510510 = c$ entries, and then searched for factors in blocks of 510510, using a lookup table to skip candidate factors divisible by one of the 7 listed primes. It actually made the running time worse (11 seconds); I suspect the memory access cost is not worth it, given the density of numbers coprime to $(2,3,5,\ldots,17)$.</p>
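For comparison, a deterministic Miller-Rabin test decides a 20-digit number in microseconds, with no factoring. A sketch in Python rather than the question's C++ (the witness set below is the well-known one proven sufficient for all n below roughly 3.3e24, which covers 20-digit inputs):

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin for n < ~3.3e24."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False      # a witnesses that n is composite
    return True

print(is_prime(2**61 - 1))    # Mersenne prime M61 -> True
```

Each call does at most 12 modular exponentiations, so worst-case cost is a few thousand modular multiplications instead of the ~10^9 trial divisions above.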
Radu Titiu
37,056
<p>If $\overline{a}\cdot \overline{b}= \overline{1}$ then there is $k \in \mathbb{Z}$ such that $ab+kn=1$, therefore $\gcd(a,n)=1$. </p> <p>Conversely, let $a \in \mathbb{Z}$ such that $\gcd(a,n)=1$. Using the extended Euclidean algorithm one can find $b,k \in \mathbb{Z}$ satisfying $ab+kn=1$, so $\overline{a}\cdot \overline{b}=\overline{1}$. </p>
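The "conversely" direction is constructive: the extended Euclidean algorithm produces the Bezout coefficients explicitly. A sketch in Python (my own helper, not from the answer; note that since Python 3.8 the built-in pow(a, -1, n) computes the same inverse and raises ValueError exactly when gcd(a, n) != 1):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# a = 7 is a unit mod n = 30 because gcd(7, 30) = 1:
g, b, k = extended_gcd(7, 30)
print(g, 7 * b % 30)          # gcd is 1, and 7*b = 1 (mod 30)

# Built-in equivalent on Python 3.8+:
print(pow(7, -1, 30))
```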
180,296
Bill Dubuque
242
<p><strong>Hint</strong> $\ $ Over $\,\Bbb Z\,$ (or any <a href="http://en.wikipedia.org/wiki/B%C3%A9zout_domain" rel="nofollow">$\rm\color{#C00}{Bezout}$ domain</a> $\rm\,Z)\,$ we have $$\rm gcd(a,b) = 1\color{#C00}{\iff} \exists\, j,k\in Z\!:\ j\,a + k\,b = 1\iff \exists\, j\in Z\!:\ j\,a \equiv 1\,\ (mod\ b)$$ </p> <p><strong>Remark</strong> $\ $ The key fact is that not only do gcds exist, but they have <em>linear</em> form - the characteristic property of a Bezout domain. Said ideally, two-generated ideals are principal $\rm\:(a,b) = (c),\:$ which is equivalent to saying that $\rm\:c\:$ is a <em>linear</em> common divisor of $\rm\:a,b,\:$ i.e. a common divisor that is also a $\rm\,Z$-linear combination $\rm\, c = j\,a+k\,b\:$ for $\rm\:j,k\in Z.\:$ Therefore, by induction, every finitely generated ideal is principal $\rm\:(a_1,\ldots,\,a_n) = (a),\:$ where $\rm\:a = gcd(a_1,\ldots,a_n).$</p>
164,002
<p>When I am reading a mathematical textbook, I tend to skip most of the exercises. Generally I don't like exercises, particularly artificial ones. Instead, I concentrate on understanding proofs of theorems, propositions, lemmas, etc.</p> <p>Sometimes I try to prove a theorem before reading the proof. Sometimes I try to find a different proof. Sometimes I try to find an example or a counter-example. Sometimes I try to generalize a theorem. Sometimes I come up with a question and I try to answer it. </p> <p>I think those are good "exercises" for me.</p> <p><strong>EDIT</strong> What I think is a very good "exercise" is as follows:</p> <p>(1) Try to prove a theorem before reading the proof.</p> <p>(2) If you have no idea how to prove it, take a look <strong>a bit</strong> at the proof.</p> <p>(3) Continue to try to prove it.</p> <p>(4) When you are stuck, take a look <strong>a bit</strong> at the proof.</p> <p>(5) Repeat (3) and (4) until you come up with a proof.</p> <p><strong>EDIT</strong> Another method I recommend rather than doing "homework type" exercises: Try to write a "textbook" on the subject. You don't have to write a real one. I tried to do this on Galois theory. Actually I posted "lecture notes" on Galois theory on an internet mathematics forum. I believe my knowledge and skill on the subject greatly increased.</p> <p>For example, I found <a href="https://math.stackexchange.com/questions/131757/a-proof-of-the-normal-basis-theorem-of-a-cyclic-extension-field">this</a> while I was writing "lecture notes" on Galois theory. I could also prove that any profinite group is a Galois group. This fact was mentioned in Neukirch's algebraic number theory. I found later that Bourbaki had this problem as an exercise. I don't understand its hint, though. Later I found someone wrote a paper on this problem. I made other small "discoveries" during the course. I was planning to write a "lecture note" on Grothendieck's Galois theory. 
This is an attractive plan, but has not yet been started.</p> <p><strong>EDIT</strong> If you want to have exercises, why not produce them yourself? When you are learning a subject, you naturally come up with questions. Some of these can be good exercises. At least you have the motivation not given by others. It is not homework. For example, I came up with the following question when I was learning algebraic geometry. I found that this was a good problem.</p> <p>Let $k$ be a field. Let $A$ be a finitely generated commutative algebra over $k$. Let $\mathbb{P}^n = Proj(k[X_0, ... X_n])$. Determine $Hom_k(Spec(A), \mathbb{P}^n)$.</p> <p>As I wrote, trying to find examples or counter-examples can be good exercises, too. For example, <a href="https://math.stackexchange.com/questions/133790/an-example-of-noncommutative-division-algebra-over-q-other-than-quaternion-alg">this</a> is a good exercise in the theory of division algebras.</p> <p><strong>EDIT</strong> Let me show you another example of self-exercises. I encountered the following problem when I was writing a "lecture note" on Galois theory.</p> <p>Let $K$ be a field. Let $K_{sep}$ be a separable algebraic closure of $K$. Let $G$ be the Galois group of $K_{sep}/K$.</p> <p>Let $A$ be a finite dimensional algebra over $K$. If $A$ is isomorphic to a product of fields each of which is separable over $K$, $A$ is called a finite etale algebra. Let $FinEt(K)$ be the category of finite etale algebra over $K$.</p> <p>Let $X$ be a finite set. Suppose $G$ acts on $X$ continuously. $X$ is called a finite $G$-set. Let $FinSets(G)$ be the category of finite $G$-sets.</p> <p><em>Then $FinEt(K)$ is anti-equivalent to $FinSets(G)$.</em></p> <p>This is a zero-dimensional version of the main theorem of Grothendieck's Galois theory. You can find the proof elsewhere, but I recommend you to prove it yourself. It's not difficult and it's a good exercise of Galois theory. 
<em>Hint</em>: Reduce it to the case that $A$ is a finite separable extension of $K$ and $X$ is a finite transitive $G$-set.</p> <p><strong>EDIT</strong> If you think this is too broad a question, you are free to add suitable conditions. This is a soft question.</p>
Qiaochu Yuan
232
<p>Depends on the textbook, I suppose. Some textbooks introduce a lot of material in the exercises that isn't developed in the main text. </p>
164,002
Matt E
221
<p>If your goal is to become a research mathematician, then doing exercises is important. Of course, there will be the rare person who can skip exercises with no detriment to their development, but (and I speak from the experience of roughly twenty years of involvement in training for research mathematics) such people are genuinely rare.</p> <p>The other kinds of exercises that you describe are also good, and you should do them too!</p> <p>The point of doing set exercises is to practice using particular techniques, so that you can recognize how and when to use them when you are confronted with technical obstacles in your research. </p> <p>In my own field, two books whose exercises I routinely recommend to my students are Hartshorne's <em>Algebraic geometry</em> text and Silverman's <em>Elliptic curves</em> text. The exercises at the end of Cassels and Frolich are also good.</p> <p>Atiyah and MacDonald also is known for its exercises.</p> <p>One possible approach (not recommended for everyone, though) is to postpone doing exercises if you find them too difficult (or too time-consuming, but this is usually equivalent to too difficult), but to return to them later when you feel that you understand the subject better. However, if upon return, you still can't fairly easily solve standard exercises on a topic you think you know well, you probably don't know the topic as well as you think you do.</p> <hr> <p>If your goal is <em>not</em> to become a research mathematician, then <em>understanding</em> probably has a different meaning and purpose, and your question will then possibly have a different answer, which I am not the right person to give.</p>
164,002
Amitesh Datta
10,467
<p>I think that the most important point in mathematics is to think about the subject for long periods of time. If you think about mathematics, then you will often develop intuition which is very important. Of course, if you think about something for long periods of time, then your memory of the material is better as well. </p> <p>Ultimately, the point is that people generally learn more by doing (compare active learning to passive learning). Of course, there are exceptions to every rule and you are the person who best understands your own strengths and weaknesses. The important point is to identify your weaknesses and work hard on them through a combination of active thinking and problem solving.</p>
3,078,176
<p>Given an equation in partial derivatives of the form <span class="math-container">$Af_x+Bf_y=\phi(x,y)$</span>, for example <span class="math-container">$$f_x-f_y=(x+y)^2,$$</span> how do I know which change of coordinates is appropriate to solve the equation? In this example, the change of coordinates is <span class="math-container">$u=x+y$</span>, <span class="math-container">$v=x^2-y^2$</span>; why?</p>
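For first-order equations $Af_x + Bf_y = \phi$, the guiding principle is to pick $u$ constant along the characteristic direction $(A, B)$, i.e. $Au_x + Bu_y = 0$, together with any independent second coordinate $v$. A sketch for this example (my own working; I use the simpler companion $v = x - y$, though $v = x^2 - y^2$ also works wherever it is independent of $u$):

```latex
% Here A = 1, B = -1, and u = x + y satisfies A u_x + B u_y = 1 - 1 = 0.
u = x + y, \qquad v = x - y,
\qquad f_x = f_u + f_v, \qquad f_y = f_u - f_v .
% Hence the PDE collapses to an ODE in v alone:
f_x - f_y = 2 f_v = u^2
\quad\Longrightarrow\quad
f = \tfrac12 u^2 v + g(u) = \tfrac12 (x+y)^2 (x-y) + g(x+y),
% for an arbitrary differentiable function g.
```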
lightxbulb
463,794
<p>Use <span class="math-container">$\cos^2\theta = 1 - \sin^2\theta$</span> and <span class="math-container">$\tan\theta = \frac{\sin\theta}{\cos\theta}=c$</span>. Then <span class="math-container">$\sin\theta = c\sqrt{1-\sin^2\theta}$</span>, then <span class="math-container">$\sin^2\theta = c^2-c^2\sin^2\theta$</span>, so <span class="math-container">$\sin^2\theta(1+c^2) = c^2$</span>, and finally <span class="math-container">$\sin\theta = \frac{c}{\sqrt{1+c^2}}$</span>. Note that when taking roots I have taken the positive root, since <span class="math-container">$\sin\theta&gt;0$</span> and also <span class="math-container">$\tan\theta &gt;0$</span>, which implies <span class="math-container">$\cos\theta &gt; 0$</span>.</p>
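A quick numerical sanity check of the resulting identity $\sin(\arctan c) = c/\sqrt{1+c^2}$ (my own sketch, for $c > 0$ as in the answer):

```python
import math

for c in (0.5, 1.0, 7 / 24, 3.2):
    theta = math.atan(c)              # first-quadrant angle with tan(theta) = c
    lhs = math.sin(theta)
    rhs = c / math.sqrt(1 + c * c)
    assert math.isclose(lhs, rhs)
print("identity holds for sampled c > 0")
```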
3,078,176
B. Goddard
362,009
<p>Draw a right triangle that shows <span class="math-container">$\tan \theta = 7/24.$</span> There are infinitely many, but choosing the one with legs <span class="math-container">$7$</span> and <span class="math-container">$24$</span> is a swell choice. Now use Pythagorean Theorem to find the length of the hypotenuse. Finding the values of any of the other trig functions should be easy now. </p> <p>The only snag is that you have to think about which quadrant you're in. But you have <span class="math-container">$\sin \theta$</span> and <span class="math-container">$\tan \theta$</span> both positive, so you're in the first quadrant. </p>
1,127,596
<p>I am trying to show that the value of $\int^\infty_0 \int^\infty_0 \sin(x^2+y^2)\, dx\, dy$ is $\frac{\pi}{4}$ using Fresnel integrals. I'm having trouble splitting apart the integrand in order to actually be able to use the Fresnel integrals. Any help is appreciated. </p> <p>Answer: $\int^\infty_0 \int^\infty_0 \sin(x^2+y^2)\, dx\, dy = \int^\infty_0 \int^\infty_0 \sin(x^2)\cos(y^2)+\cos(x^2)\sin(y^2)\, dx\, dy$</p> <p>$= \int^\infty_0 \frac{\sqrt{2\pi}}{4} \cos(y^2)+\frac{\sqrt{2\pi}}{4} \sin(y^2)\, dy$ (the $\frac{\sqrt{2\pi}}{4}$ comes from the established values of the Fresnel integrals)</p> <p>$=$ do the same thing for $dy$ and then you get $\frac{\pi}{4}$ as desired.</p> <p>Now I'm working on doing this with polar coords.</p>
Random Jack
140,701
<p>Let's recall the definition of the convergence of an improper multiple integral (of the first kind). Assume that a function $f \colon \mathbb{R}^m \to \mathbb{R}$ is continuous almost everywhere. Consider the following sequence of sets $\{E_n\}_{n = 1}^\infty$:</p> <ol> <li>Each $E_n$ is an open Jordan-measurable subset of $\mathbb{R}^m$.</li> <li>$\overline{E_n} \subset E_{n + 1}$ and $\bigcup_{n = 1}^\infty E_n = \mathbb{R}^m.$</li> </ol> <p>Consider the corresponding sequence of Riemann integrals: $$ I_n = \int\limits_{E_n} f(x) dx, \quad n = 1, 2, \dots$$ If for every sequence $\{E_n\}_{n = 1}^\infty$, satisfying 1 and 2, there exists a finite limit $I = \lim_{n \rightarrow \infty} I_n$ independent of the choice of $\{E_n\}_{n = 1}^\infty$, then the improper multiple integral $$\int\limits_{\mathbb{R}^m}f(x)dx$$ converges (exists) and is equal to $I$. Otherwise (if this limit is infinite or does not exist), this integral diverges. To apply this definition to other domains $E \subset \mathbb{R}^m$ (in your case $E$ is the first quadrant) instead of $f$ we consider the following function $$F(x) = \begin{cases}f(x), &amp;x \in E,\\ 0, &amp;x \in \mathbb{R}^m \setminus E.\end{cases}$$ Then $$\int\limits_{E}f(x)dx \triangleq \int\limits_{\mathbb{R}^m}F(x)dx.$$ In your case, consider two sequences $\{E_n\}_{n = 1}^\infty$ and $\{E'_n\}_{n = 1}^\infty$ satisfying 1 and 2: $$ E_n = \{(x, y) \in \mathbb{R}^2 \mid |x| &lt; n,\ |y| &lt; n\},\ E'_n = \{(x, y) \in \mathbb{R}^2 \mid x^2 + y^2 &lt; 2 \pi n\}, n \in \mathbb{N}.$$ For the first sequence we have: $$I_n = \iint\limits_{E_n}F(x, y)dxdy = \int_0^n dx\int_0^n\sin(x^2+y^2)dy = 2\int_0^n \sin{x^2} dx\int_0^n\cos{y^2}dy,$$ and hence (using Fresnel integrals) we have $\lim_{n \rightarrow \infty} I_n = \frac{\pi}{4}.$</p> <p>For the second sequence (using polar coordinates) we have: $$I_n = \iint\limits_{E'_n}F(x, y)dxdy = \int_0^\frac{\pi}{2} d\varphi\int_0^{\sqrt{2\pi n}} r\sin(r^2)dr = \frac{\pi}{4}(1 - \cos 2\pi n) = 0,$$
and hence $\lim_{n \rightarrow \infty} I_n = 0$, which means that this limit depends on the choice of a sequence and hence by definition this integral diverges.</p> <p>Also notice that for improper multiple integrals there is no notion of conditional convergence because of the following theorem:</p> <blockquote> <p>Assume that a function $f \colon \mathbb{R}^m \to \mathbb{R}$ $(m \geq 2)$ is continuous almost everywhere. Then the following two integrals $$1. \int\limits_{\mathbb{R}^m}f(x)dx \qquad 2. \int\limits_{\mathbb{R}^m}|f(x)|dx $$ converge or diverge simultaneously.</p> </blockquote>
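The radial step of the polar computation above can be checked numerically; a minimal Python sketch (an added illustration, not part of the original answer):

```python
import math

# Midpoint Riemann sum for \int_0^R r*sin(r^2) dr, to compare with the
# closed form (1 - cos(R^2))/2 used in the answer.
def radial_integral(R, steps=200000):
    h = R / steps
    return sum((k + 0.5) * h * math.sin(((k + 0.5) * h) ** 2)
               for k in range(steps)) * h

for R in [1.0, 2.0, 3.0]:
    exact = (1 - math.cos(R * R)) / 2
    assert abs(radial_integral(R) - exact) < 1e-4

# Over quarter-disks of radius sqrt(2*pi*n), the exact value
# (pi/4)*(1 - cos(2*pi*n)) vanishes for every n, matching I_n = 0 above.
for n in range(1, 4):
    R2 = 2 * math.pi * n
    assert abs((math.pi / 4) * (1 - math.cos(R2))) < 1e-9
```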
3,197,683
<p>Here is the theorem that I need to prove</p> <blockquote> <p>For <span class="math-container">$K = \mathbb{Q}[\sqrt{D}]$</span> we have</p> <p><span class="math-container">$$\begin{align}O_K = \begin{cases} \mathbb{Z}[\sqrt{D}] &amp; D \equiv 2, 3 \mod 4\\ \mathbb{Z}\left[\frac{1 + \sqrt{D}}{2}\right] &amp; D \equiv 1 \mod 4 \end{cases} \end{align}$$</span></p> </blockquote> <p>The theorem we need to use is this one that can be found in any generic number theory textbook.</p> <blockquote> <p>an element <span class="math-container">$\alpha\in K$</span> is an algebraic integer if and only if its minimal polynomial has coefficients in <span class="math-container">$\mathbb{Z}$</span>.</p> </blockquote> <p>I tried many avenues of attack but it is extremely hard to prove. How do I prove it?</p>
lonza leggiera
632,373
<p>You can find a proof in many number theory textbooks. In Hardy and Wright's classic <em>Introduction to the Theory of Numbers</em>, for instance, it's Theorem 238 on <a href="https://archive.org/details/Hardy_and_Wright_-_Introduction_to_the_Theory_of_Numbers/page/n221" rel="nofollow noreferrer">p. 207</a>.</p>
1,375,958
<p>I am looking for a bounded function $f$ on $\mathbb{R}_+$ satisfying $f(0)=0$, $f'(0)=0$ and with bounded first and second derivatives. My initial idea has been to consider trigonometric functions or compositions of them, but I still haven't found an adequate one. Any ideas would be greatly appreciated.</p>
3SAT
203,577
<p>$$f(x)=\sin^2 (x)$$</p> <p>$$f(0)=0$$</p> <p>$$f'(x)=2\sin(x)\cos(x)=\sin(2x)$$</p> <p>$$f'(0)=0$$</p> <p>$$f''(x)=2\cos(2x)$$</p> <p>All the required bounds hold: $|f(x)|\le 1$, $|f'(x)|\le 1$ and $|f''(x)|\le 2$ on $\mathbb{R}_+$.</p>
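The claimed properties of $f(x)=\sin^2 x$ are easy to verify numerically; a small Python sketch (an added check, not part of the original answer):

```python
import math

# f(0) = 0, f'(0) = 0 (via a central difference), and boundedness of
# f, f' = sin(2x), f'' = 2*cos(2x) at sample points.
f = lambda x: math.sin(x) ** 2
h = 1e-5
fprime0 = (f(h) - f(-h)) / (2 * h)   # central difference at 0
assert f(0) == 0
assert abs(fprime0) < 1e-9
for x in [0.1 * k for k in range(100)]:
    assert abs(f(x)) <= 1
    assert abs(math.sin(2 * x)) <= 1       # |f'|
    assert abs(2 * math.cos(2 * x)) <= 2   # |f''|
```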
939,725
<p>Given that $a_0=2$ and $a_n = \frac{6}{a_{n-1}-1}$, find a closed form for $a_n$.</p> <p>I tried listing out the first few values of $a_n: 2, 6, 6/5, 30, 6/29$, but no pattern came out. </p>
Claude Leibovici
82,404
<p>I think that Semiclassical proposed a very nice solution rewriting $$a_n = \frac{6}{a_{n-1}-1}$$ $$\dfrac{1}{a_n+2}=\dfrac{1}{2}-\dfrac{3/2}{a_{n-1}+2}$$ So, let us define $$b_n=\dfrac{1}{a_n+2}$$ (with $b_0=\dfrac{1}{4}$); so the recurrence equation is simply $$b_n=\dfrac{1}{2}-\dfrac{3}{2}b_{n-1}$$ from which $$b_n=\frac{1}{20} \left(4+\left(-\frac{3}{2}\right)^n\right)$$ and then $$a_n=\frac{20}{4+\left(-\frac{3}{2}\right)^n}-2$$ as shown by Git Gud.</p>
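The closed form can be checked against the recurrence in exact rational arithmetic; a minimal Python sketch (an added verification, not part of the original answer):

```python
from fractions import Fraction

# Check a_n = 20/(4 + (-3/2)^n) - 2 against a_0 = 2, a_n = 6/(a_{n-1} - 1).
a = Fraction(2)
for n in range(1, 12):
    a = Fraction(6) / (a - 1)
    closed = Fraction(20) / (4 + Fraction(-3, 2) ** n) - 2
    assert a == closed   # exact equality, no floating-point error
```

For example, the formula gives $a_1 = 20/(5/2) - 2 = 6$ and $a_2 = 20/(25/4) - 2 = 6/5$, matching the values listed in the question.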
3,177,343
<p>I have the following minimization problem in <span class="math-container">$x \in \mathbb{R}^n$</span></p> <p><span class="math-container">$$\begin{array}{ll} \text{minimize} &amp; \|x\|_2 - c^T x\\ \text{subject to} &amp; Ax = b\end{array}$$</span></p> <p>where <span class="math-container">$A \in \mathbb{R}^{m \times n}$</span> is right-invertible, <span class="math-container">$b \in \mathbb{R}^m$</span> and <span class="math-container">$c \in \mathbb{R}^n$</span>.</p> <p>I tried to solve this using Lagrange multipliers, but am unable to find a closed form solution for <span class="math-container">$x$</span>, because the derivative of <span class="math-container">$\|x\|_2$</span> with respect to <span class="math-container">$x$</span>, which is <span class="math-container">$\frac{x}{\|x\|_2}$</span>, contains <span class="math-container">$\sqrt{x^Tx}$</span> in the denominator.</p> <p>Any help would be appreciated.</p>
Reinhard Meier
407,833
<p>Using the derivatives of the Lagrange function <span class="math-container">$\mathcal{L}(x,\lambda) =\|x\|-c^Tx-\lambda^T(Ax-b),$</span> we get <span class="math-container">$$ \frac{x}{\|x\|} - c - A^T \lambda = 0 $$</span> Note that <span class="math-container">$\lambda\in\mathbb{R}^m.$</span> This results in <span class="math-container">$x = (c+A^T\lambda)\,\|x\|.$</span> We insert this <span class="math-container">$x$</span> in <span class="math-container">$Ax=b$</span> and we get <span class="math-container">$A(c+A^T\lambda)\,\|x\| = b$</span> or <span class="math-container">$$ AA^T\lambda = \frac{b}{\|x\|} - Ac $$</span> or <span class="math-container">$$ \lambda = \left(AA^T\right)^{-1}\left(\frac{b}{\|x\|} - Ac\right) $$</span> We put this <span class="math-container">$\lambda$</span> into <span class="math-container">$x = (c+A^T\lambda)\,\|x\|$</span> and we get <span class="math-container">$$ x = \left(c+A^T\left(AA^T\right)^{-1}\left(\frac{b}{\|x\|} - Ac\right)\right)\,\|x\| $$</span> or <span class="math-container">$$ x = \left(I-A^T\left(AA^T\right)^{-1}A\right) c \|x\| + A^T\left(AA^T\right)^{-1} b $$</span> We define <span class="math-container">$v$</span> as the projection of <span class="math-container">$c$</span> on the kernel of <span class="math-container">$A,$</span> which is <span class="math-container">$v = (I-A^T (AA^T)^{-1}A)c,$</span> and we define <span class="math-container">$x_0 =A^T\left(AA^T\right)^{-1} b,$</span> such that we get <span class="math-container">$$ x= v \|x\| + x_0 $$</span> We now have <span class="math-container">$$ \|x\|^2 = x^Tx = (v \|x\| + x_0)^T(v \|x\| + x_0) $$</span> It can easily be shown that <span class="math-container">$x_0^Tv = 0.$</span> Therefore, <span class="math-container">$$ (v^Tv-1)\|x\|^2 + x_0^Tx_0 = 0 $$</span> This is a quadratic equation for <span class="math-container">$\|x\|.$</span> We can solve this for <span class="math-container">$\|x\|$</span> and plug this <span
class="math-container">$\|x\|$</span> into <span class="math-container">$x= v \|x\| + x_0.$</span></p> <p>I have only addressed the possibility to get a closed form for <span class="math-container">$x,$</span> but not the question of whether this <span class="math-container">$x$</span> is actually a valid solution of the problem. The resulting <span class="math-container">$x$</span> fulfills the necessary condition for solutions at differentiable points of the Lagrange function, but it does not necessarily fulfill the sufficient conditions. If we can only get complex solutions, this means that the problem is not bounded. Note also that the Lagrange function is not differentiable at <span class="math-container">$x=0$</span>, which means that this point must be addressed separately.</p>
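The recipe above can be worked through on a tiny instance by hand; a minimal Python sketch (the matrices $A=[1\ 1]$, $b=[1]$, $c=[1,0]$ are an assumed toy example chosen so every quantity is computable without a linear-algebra library):

```python
import math

# For A = [1 1], b = [1], c = [1, 0]:
# A A^T = [2], so x0 = A^T (A A^T)^{-1} b and v = (I - A^T (A A^T)^{-1} A) c
# can be written down directly.
x0 = [0.5, 0.5]                      # minimum-norm solution of A x = b
v = [0.5, -0.5]                      # projection of c onto ker(A)
vv = v[0] ** 2 + v[1] ** 2           # v^T v = 0.5
x0x0 = x0[0] ** 2 + x0[1] ** 2       # x0^T x0 = 0.5
norm_x = math.sqrt(x0x0 / (1 - vv))  # from (v^T v - 1)||x||^2 + x0^T x0 = 0
x = [v[0] * norm_x + x0[0], v[1] * norm_x + x0[1]]   # candidate minimizer

assert abs(x[0] + x[1] - 1) < 1e-12                  # feasibility: A x = b
obj = lambda p: math.hypot(p[0], p[1]) - p[0]        # ||p|| - c^T p
assert obj(x) <= obj(x0) + 1e-12                     # beats the least-norm point
```

Here the candidate is $x=(1,0)$ with objective value $0$, which is optimal for this instance since $\|x\|-c^Tx\ge 0$ whenever $\|c\|=1$.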
123,269
<p>Consider the following differential equation:</p> <p>$$\begin{align*}&amp;\rho C_p\left(\frac{\partial T}{\partial t}\right)=k\left[\frac{\partial^2 T}{\partial x^2}\right]+\dot{q}\\ &amp;\text{at }x=0,\;\frac{\partial T}{\partial x}=0\\ &amp;\text{at }x=1,\frac{\partial T}{\partial x}=C_1(T(t,1)-C_2)\\ &amp;\text{at }t=0,T(0,x)=C_3 \end{align*}$$</p> <pre><code>c1 = -10; c2 = 10; c3 = 20; q[t, x] = 100000; heat = NDSolve[{1591920 D[u[t, x], t] == .87 D[u[t, x], x, x] + q[t, x], (D[u[t, x], x] /. x -&gt; 0) == 0, (D[u[t, x], x] /. x -&gt; 1) == c1 (u[t, 1] - c2), u[0, x] == c3}, u, {t, 0, 600}, {x, 0, 1}] </code></pre> <p>The solver doesn't work and gives the warning:</p> <blockquote> <p>NDSolve::ibcinc: Warning: boundary and initial conditions are inconsistent. >></p> </blockquote> <p>It is clear where the problem lies:</p> <pre><code>D[u[0, x], x] /. {x -&gt; 1} /. heat (u[0.0, 1] - c2) /. heat </code></pre> <blockquote> <p>{0.}</p> <p>{10.}</p> </blockquote> <p>Causing <code>(D[u[t, x], x] /. x -&gt; 1) == c1 (u[t, 1] - c2)</code> to evaluate as false</p> <p>And rather than approach <code>c2</code>, the heat goes up: Note that editing <code>c1</code> does affect the graph, while editing <code>c2</code> does not.</p> <pre><code>Plot3D[Evaluate[u[t, x] /. heat], {t, 0, 600}, {x, 0, 1}, PlotRange -&gt; All, AxesLabel -&gt; {"t(s)", "x(m)"}, ColorFunction -&gt; "TemperatureMap"] </code></pre> <p><a href="https://i.stack.imgur.com/bDfjK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bDfjK.png" alt="plot"></a></p> <p>Why does this boundary directive refuse to be set, with the whole domain of <code>D[u[0, x], x]</code> returning <code>0</code>?</p> <p><a href="https://mathematica.stackexchange.com/questions/122079/problem-with-convection-heat-transfer-boundary-condition/122089?noredirect=1#comment333236_122089">Linked</a></p>
xzczd
1,871
<h1>Short Answer</h1> <p>Setting</p> <pre><code>Method -&gt; {"MethodOfLines", "DifferentiateBoundaryConditions" -&gt; {True, "ScaleFactor" -&gt; 1}} </code></pre> <p>inside <code>NDSolve</code> will resolve the problem. It's not necessary to set <code>"ScaleFactor"</code> to <code>1</code>; it just needs to be a not-that-small positive number.</p> <h1>Long Answer</h1> <p>The answer to this problem is hidden in <a href="https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html#1306392612" rel="noreferrer">this obscure tutorial</a>. I'll try my best to retell it in an easier-to-understand way.</p> <p>Let's consider the following simpler initial-boundary value problem (IBVP) for the heat equation that suffers from the same issue:</p> <p><span class="math-container">$$\frac{\partial u}{\partial t}=\frac{\partial^2 u}{\partial x^2}$$</span> <span class="math-container">$$u(0,x)=x(1-x)$$</span> <span class="math-container">$$u(t,0)=0,\ \frac{\partial u}{\partial x}\bigg|_{x=1}=0$$</span> <span class="math-container">$$t&gt;0,\ 0\leq x\leq 1$$</span></p> <p>Clearly <span class="math-container">$u(0,x)=x(1-x)$</span> and <span class="math-container">$\frac{\partial u}{\partial x}\bigg|_{x=1}=0$</span> are inconsistent. When you solve it with <code>NDSolve</code> / <code>NDSolveValue</code>, an <code>ibcinc</code> warning will be spit out:</p> <pre><code>tend = 1; xl = 0; xr = 1; With[{u = u[t, x]}, eq = D[u, t] == D[u, x, x]; ic = u == (x - xl) (xr - x) /. t -&gt; 0; bc = {u == 0 /. x -&gt; xl, D[u, x] == 0 /. x -&gt; xr};] sol = NDSolveValue[{eq, ic, bc}, u, {t, 0, tend}, {x, xl, xr}] </code></pre> <blockquote> <p>NDSolveValue::ibcinc</p> </blockquote> <p>and further checking shows the boundary condition (b.c.) <span class="math-container">$\frac{\partial u}{\partial x}\bigg|_{x=1}=0$</span> isn't satisfied at all:</p> <pre><code>Plot[D[sol[t, x], x] /.
x -&gt; xr // Evaluate, {t, 0, tend}, PlotRange -&gt; All] </code></pre> <p><img src="https://i.stack.imgur.com/slyad.png" alt="Mathematica graphics"></p> <p>Why does this happen?</p> <p>The best way to explain it is to re-implement the method used by <code>NDSolve</code> in this case, i.e. the <em>method of lines</em>, all by ourselves.</p> <p>As mentioned in the document, the method of lines is a numeric method that discretizes the partial differential equation (PDE) in all but one dimension and then integrates the semi-discrete problem as a system of ordinary differential equations (ODEs) or differential algebraic equations (DAEs). Here I discretize the PDE with the 2nd order centered difference formula:</p> <p><span class="math-container">$$f'' (x_i)\simeq\frac{f (x_{i}-h)-2 f (x_i)+f (x_{i}+h)}{h^2}$$</span></p> <pre><code>Clear@dx formula = eq /. {D[u[t, x], t] -&gt; u[x]'[t], D[u[t, x], x, x] -&gt; (u[x - dx][t] - 2 u[x][t] + u[x + dx][t])/dx^2} points = 5; dx = (xr - xl)/(points - 1); ode = Table[formula, {x, xl + dx, xr - dx, dx}] </code></pre> <blockquote> <pre><code>{u[1/4]'[t] == 16 (u[0][t] - 2 u[1/4][t] + u[1/2][t]), u[1/2]'[t] == 16 (u[1/4][t] - 2 u[1/2][t] + u[3/4][t]), u[3/4]'[t] == 16 (u[1/2][t] - 2 u[3/4][t] + u[1][t])} </code></pre> </blockquote> <p>I've chosen a very coarse grid for better illustration. The initial condition (i.c.) should also be discretized:</p> <pre><code>odeic = Table[ic /. u[t_, x_] :&gt; u[x][t] // Evaluate, {x, xl, xr, dx}] </code></pre> <blockquote> <pre><code>{u[0][0] == 0, u[1/4][0] == 3/16, u[1/2][0] == 1/4, u[3/4][0] == 3/16, u[1][0] == 0} </code></pre> </blockquote> <p>We still need to deal with the b.c. The Dirichlet b.c. doesn't need discretization:</p> <pre><code>bcnew1 = bc[[1]] /. u[t_, x_] :&gt; u[x][t] </code></pre> <blockquote> <pre><code>u[0][t] == 0 </code></pre> </blockquote> <p>The Neumann b.c.
contains a derivative in <span class="math-container">$x$</span>, so we need to discretize it with a one-sided difference formula:</p> <p><span class="math-container">$$f' (x_n)\simeq \frac{f (x_{n}-2h)-4 f (x_{n}-h)+3 f (x_n)}{2 h}$$</span></p> <pre><code>bcnew2 = bc[[2]] /. D[u[t, x_], x_] :&gt; (u[x - 2 dx][t] - 4 u[x - dx][t] + 3 u[x][t])/(2 dx) </code></pre> <blockquote> <pre><code>2 (u[1/2][t] - 4 u[3/4][t] + 3 u[1][t]) == 0 </code></pre> </blockquote> <p>"OK, 5 unknowns, 5 equations, we can now solve the system with any <em>ODE</em> solver! Just <em>as <code>NDSolve</code> does</em>!" Sadly you were wrong if you thought this statement is correct, because:</p> <ol> <li><p>Though <code>{ode, odeic, bcnew1, bcnew2}</code> is already a solvable system, it's not a set of ODEs, but <strong>DAE</strong>s. Notice that here ODE refers to an <strong>explicit</strong> ODE, i.e. the coefficient of the derivative term can't be <span class="math-container">$0$</span>. Clearly, <code>bcnew1</code> and <code>bcnew2</code> don't <strong>explicitly</strong> contain a derivative with respect to <span class="math-container">$t$</span>.</p></li> <li><p>Though <code>NDSolve</code> is able to handle this DAE system directly, it doesn't solve the PDE in this way by default. Instead, it'll try to transform the DAE system to an explicit ODE system, probably because its ODE solver is generally stronger than the DAE solver (at least now).</p></li> </ol> <p>So, how does <code>NDSolve</code> transform the DAE system to an ODE system? "That's simple! Just eliminate some of the variables with <code>bcnew1</code> and <code>bcnew2</code>! " Yeah this is a possible method, but not the one implemented in <code>NDSolve</code>. <code>NDSolve</code> has chosen a method that may be rather unusual at first glance. It mixes the original b.c. with its 1st-order derivative with respect to <span class="math-container">$t$</span>. For our specific problem, the b.c.
becomes:</p> <pre><code>odebc1 = D[#, t] + scalefactor1 # &amp; /@ bcnew1 </code></pre> <blockquote> <pre><code>scalefactor1 u[0][t] + u[0]'[t] == 0 </code></pre> </blockquote> <pre><code>odebc2 = D[#, t] + scalefactor2 # &amp; /@ bcnew2 </code></pre> <blockquote> <pre><code>2 scalefactor2 (u[1/2][t] - 4 u[3/4][t] + 3 u[1][t]) + 2 (u[1/2]'[t] - 4 u[3/4]'[t] + 3 u[1]'[t]) == 0 </code></pre> </blockquote> <p>Here <code>scalefactor1</code> and <code>scalefactor2</code> are properly chosen coefficients. </p> <p>It's not hard to see that this approach is systematic and easy to implement, and I guess that's the reason why <code>NDSolve</code> chooses it for transforming an algebraic equation to an ODE. Nevertheless, this method has its disadvantage. The generated b.c. is equivalent to the original b.c. <strong>only if the original b.c. is continuous and the i.c. is consistent with the b.c.</strong></p> <p>Let's use <code>odebc1</code> as an example. In our case, <code>bc[[1]]</code> is continuous, and it's consistent with <code>ic</code>, so it can be easily rebuilt from <code>odebc1</code>:</p> <pre><code>DSolve[{odebc1, odeic[[1]]}, u[0][t], t] </code></pre> <blockquote> <pre><code>{{u[0][t] -&gt; 0}} </code></pre> </blockquote> <p>However, if the i.c. is something that isn't consistent with <code>bc[[1]]</code>, for example <code>u[0][0] == 1</code>, the b.c. rebuilt from <code>odebc1</code> will become:</p> <pre><code>DSolve[{odebc1, u[0][0] == 1}, u[0][t], t] </code></pre> <blockquote> <pre><code>{{u[0][t] -&gt; E^(-scalefactor1 t)}} </code></pre> </blockquote> <p>It's no longer equivalent to <code>bc[[1]]</code>, but when <code>scalefactor1</code> is a large positive number, this b.c. will converge to the original one.</p> <p>Now here comes the key point.
As stated in the document:</p> <blockquote> <p>With the default <code>"ScaleFactor"</code> value of <code>Automatic</code>, a scaling factor of <code>1</code> is used for Dirichlet boundary conditions and a scaling factor of <code>0</code> is used otherwise.</p> </blockquote> <p>i.e. <code>scalefactor2</code> will be set to <code>0</code>. Guess what b.c. will be rebuilt in this way:</p> <pre><code>With[{scalefactor2 = 0}, DSolve[{D[#, t] + scalefactor2 # &amp; /@ bc[[2]], D[ic, x] /. x -&gt; xr}, D[u[t, x], x] /. x -&gt; xr, t]] </code></pre> <blockquote> <pre><code>{{Derivative[0, 1][u][t, 1] -&gt; -1}} </code></pre> </blockquote> <p>It's a completely different b.c.</p> <p>Back to the problem mentioned in the question, we can analyse its b.c. with the same method as above:</p> <pre><code>bcInQuestion = (D[u[t, x], x] /. x -&gt; 1) == c1 (u[t, 1] - c2); icInQuestion = u[0, x] == c3; With[{sf = 0}, DSolve[{D[#, t] + sf # &amp; /@ bcInQuestion, D[icInQuestion, x] /. x -&gt; 1}, D[u[t, x], x] /. x -&gt; 1, t]] /. u[0, _] :&gt; c3 </code></pre> <blockquote> <pre><code>{{Derivative[0, 1][u][t, 1] -&gt; -c1 c3 + c1 u[t, 1]}} </code></pre> </blockquote> <p>We see <code>c1</code> is still in the rebuilt b.c., while <code>c2</code> is completely killed by the differentiation; that's why the OP found "editing <code>c1</code> does affect the graph, while editing <code>c2</code> does not".</p> <p>OK, then why does <code>NDSolve</code> choose such a strange setting for the scaling factor? The document explains as follows:</p> <blockquote> <p>There are two reasons that the scaling factor to multiply the original boundary condition is zero for boundary conditions with spatial derivatives. First, imposing the condition for the discretized equation is only a spatial approximation, so it does not always make sense to enforce it as exactly as possible for all time.
Second, particularly with higher-order spatial derivatives, the large coefficients from one-sided finite differencing can be a potential source of instability when the condition is included. …</p> <p>but <strong>personally</strong> I think this design is just too lazy: why not make <code>NDSolve</code> choose a non-zero scaling factor at least when the <code>ibcinc</code> warning pops up and the order of spatial derivatives isn't too high (this is usually the case, the differential order of <em>most</em> PDEs in practice is no higher than <code>2</code>)?</p> <p>Anyway, now we know how to fix the issue. Just choose a positive scaling factor:</p> <pre><code>Clear[c2] c1 = -10; (*c2=10;*) c3 = 20; q[t, x] = 100000; heat = ParametricNDSolveValue[{1591920 D[u[t, x], t] == .87 D[u[t, x], x, x] + q[t, x], (D[u[t, x], x] /. x -&gt; 0) == 0, (D[u[t, x], x] /. x -&gt; 1) == c1 (u[t, 1] - c2), u[0, x] == c3}, u, {t, 0, 600}, {x, 0, 1}, c2, Method -&gt; {"MethodOfLines", "DifferentiateBoundaryConditions" -&gt; {True, "ScaleFactor" -&gt; 1}}]; Plot[heat[#][300, x] &amp; /@ Range[10, 50, 10] // Evaluate, {x, 0.9, 1}, PlotRange -&gt; All, AxesLabel -&gt; {"x(m)", "T"}] </code></pre> <p><img src="https://i.stack.imgur.com/iZFKJ.png" alt="Mathematica graphics"></p> <p>Now <code>c2</code> influences the solution.</p> <p>Young has also <a href="https://mathematica.stackexchange.com/a/123271/1871">solved the problem</a> with the new-in-<em>v10</em> <code>"FiniteElement"</code> method; I guess it's probably because b.c.s are imposed in a completely different way when the <code>"FiniteElement"</code> method is chosen, but I'd rather not talk too much about it given that I'm still in <em>v9</em> and haven't looked into <code>"FiniteElement"</code>.</p>
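The effect of the scale factor can be illustrated outside Mathematica with a toy Python sketch (an added illustration; the function name is hypothetical): replacing an algebraic condition $g(t)=0$ by $g'(t)+s\,g(t)=0$ means an initial violation $g(0)=1$ decays like $e^{-st}$, so with $s=0$ it is never enforced while any $s>0$ drives it to zero.

```python
import math

# Exact solution of g' + s*g = 0 with g(0) = g0: the residual of the
# (differentiated) boundary condition at time t.
def violation_at(t, s, g0=1.0):
    return g0 * math.exp(-s * t)

assert violation_at(10.0, 0.0) == 1.0                       # s = 0: never enforced
assert violation_at(10.0, 1.0) < 1e-4                       # s = 1: essentially enforced
assert violation_at(10.0, 5.0) < violation_at(10.0, 1.0)    # larger s, faster decay
```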
189,068
<p>I am trying to derive a meaningful statistic from a survey where I have asked the person taking the survey to put objects in a certain order. The order the person puts the objects is compared to a correct order and I want to calculate the error.</p> <p>For example:</p> <p>Users order: 1, 3, 4, 5, 2</p> <p>Correct order: 3, 2, 1, 5, 4</p> <p>I have come up with a method of finding an error measure: For each object in the sequence I calculate how many places it is from the correct place (not wrapping on the ends) and divide by the number of alternative places. For the object 3 - this measure would be 1/4. For the object 2 - this measure would be 3/4. Then I average these measures and divide by the measure I would get in the case of the sequence that maximizes the number of total places of error.</p> <p>I have found I can calculate this maximum with the following algorithm:</p> <pre><code>// Number of places is 5 in example. int sum = 0; int i = 1; while(i&lt;NUMBER_OF_PLACES) { sum += 2*(NUMBER_OF_PLACES - i); i += 2; } </code></pre> <p>How would one write this as an equation? Is this the most meaningful measure I can make for figuring the error?</p>
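The loop in the question can be translated to Python and compared with a candidate closed form (an added sketch; the closed form is an observation checked below, not a proof):

```python
# The loop sums 2*(N - i) over i = 1, 3, 5, ... < N; this appears to
# equal floor(N^2 / 2).
def max_total_displacement(n):
    total, i = 0, 1
    while i < n:
        total += 2 * (n - i)
        i += 2
    return total

for n in range(1, 200):
    assert max_total_displacement(n) == n * n // 2
```

For the 5-place example in the question this gives $2\cdot 4 + 2\cdot 2 = 12 = \lfloor 25/2 \rfloor$.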
Santosh Linkha
2,199
<p>As $\sin x \cos x = {1 \over 2} \sin (2x)$, and $\sin (2x) $ over the interval $[0, {\pi \over 2}]$ looks like this,</p> <p><img src="https://i.stack.imgur.com/Ga4zO.gif" alt="enter image description here"></p> <p>The limits of $t$ are the values of the $y$ axis, so you should split up the limits as $[0, {\pi \over 4}]$ and $[{\pi \over 4}, {\pi \over 2}]$. I think it would be nice to use the <a href="http://en.wikipedia.org/wiki/Weierstrass_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>.</p>
4,188,106
<p>Let's say we have the following diagram <span class="math-container">$$\require{AMScd}\begin{CD} 0 @&gt;&gt;&gt; A @&gt;&gt;&gt; B @&gt;&gt;&gt; C @&gt;&gt;&gt; 0\\ {} @V{\alpha}VV @V{\beta}VV @V{\gamma}VV {} \\ 0 @&gt;&gt;&gt; A' @&gt;&gt;&gt; B' @&gt;&gt;&gt; C' @&gt;&gt;&gt; 0 \end{CD}$$</span> where the top and bottom rows are short exact, so if <span class="math-container">$f: A \rightarrow B$</span> and <span class="math-container">$f': A' \rightarrow B'$</span> are injective and <span class="math-container">$g:B \rightarrow C$</span>, <span class="math-container">$g': B' \rightarrow C'$</span> surjective, then <span class="math-container">$\text{Im}(f)= \text{ker}(g)$</span> and <span class="math-container">$\text{Im}(f')= \text{ker}(g')$</span>. The short five lemma says that if the diagram is commutative and <span class="math-container">$\alpha$</span> and <span class="math-container">$\gamma$</span> are module isomorphisms, then <span class="math-container">$\beta$</span> is an isomorphism.</p> <p>Is it true that if <span class="math-container">$A \simeq A'$</span> and <span class="math-container">$C \simeq C'$</span> then there's an isomorphism <span class="math-container">$\beta :B \rightarrow B'$</span> such that the diagram is commutative?
The hint is: let <span class="math-container">$A = A' = \mathbb{Z}_2$</span>, <span class="math-container">$C = C' = \mathbb{Z}_2 \oplus \mathbb{Z}_2$</span> and <span class="math-container">$B = B' = \mathbb{Z}_4 \oplus \mathbb{Z}_2$</span> with <span class="math-container">$\alpha = \gamma = id$</span> and <span class="math-container">$$g(a \text{ mod }4, b\text{ mod }2) = (a\text{ mod }2,b\text{ mod }2) \\ g'(a\text{ mod }4, b\text{ mod }2) = (b\text{ mod }2,a\text{ mod }2).$$</span> Find two injective morphisms <span class="math-container">$f,f': \mathbb{Z}_2 \rightarrow \mathbb{Z}_4 \oplus \mathbb{Z}_2$</span> such that the top and bottom row are exact but there's no isomorphism <span class="math-container">$\beta: \mathbb{Z}_4 \oplus \mathbb{Z}_2 \rightarrow \mathbb{Z}_4 \oplus \mathbb{Z}_2$</span> such that the diagram is commutative.</p> <p>So for my proof, I started by finding</p> <p><span class="math-container">$$\text{ker}(g) = \text{ker}(g') = \left\{(0,0), (2,0) \right\},$$</span> here I mean <span class="math-container">$(0,0) = ([0],[0])$</span> the corresponding classes. So, the functions <span class="math-container">$f,f'$</span> that satisfies the conditions are <span class="math-container">$f=f'$</span> such that <span class="math-container">$f(0) = (0,0), f(1) = (2,0)$</span> (because <span class="math-container">$\text{Im}(f)=\text{ker}(g)$</span>). Now, suppose that there is an isomorphism <span class="math-container">$\beta: \mathbb{Z}_4 \oplus \mathbb{Z}_2 \rightarrow \mathbb{Z}_4 \oplus \mathbb{Z}_2$</span> such that the diagram is commutative. Then <span class="math-container">$\beta \circ f = f' \circ \alpha$</span> i.e., <span class="math-container">$\beta \circ f = f$</span>. So I should prove that <span class="math-container">$g=g' \circ \beta$</span> doesn't hold. 
Now my teacher said that since there are only <span class="math-container">$4$</span> possible <span class="math-container">$\beta$</span> isomorphisms, this can be checked by hand (it is easy if <span class="math-container">$\beta = id$</span> because <span class="math-container">$g \neq g'$</span>). My question is: is there a way to complete the proof without checking the condition for all possible isomorphisms <span class="math-container">$\beta$</span>? Thanks</p>
Hagen von Eitzen
39,174
<p>Too much work.</p> <hr /> <p>Simply observe that there are pairs of short exact sequences such as <span class="math-container">$$ 0\to \Bbb Z_2\to\Bbb Z_4\to \Bbb Z_2\to 0$$</span> and <span class="math-container">$$ 0\to \Bbb Z_2\to\Bbb Z_2\oplus \Bbb Z_2\to \Bbb Z_2\to 0$$</span> where the middle terms are not isomorphic.</p>
3,902,418
<p>Ok, so I know that the mean value of a function, <span class="math-container">$f(x)$</span>, on the interval <span class="math-container">$[a,b]$</span> is given by (or defined by?) <span class="math-container">$$\frac{1}{b-a}\int_a^bf(x)~dx$$</span> but I have <span class="math-container">$2$</span> basic questions about this:</p> <p><strong><span class="math-container">$1$</span></strong>: From a purely mathematical point of view, does this have any practical use? Does it give us any extra weapon to add to our mathematical arsenal?</p> <p><strong><span class="math-container">$2$</span></strong>: How exactly can I interpret this as a mean value? I've seen the geometrical interpretation (looking at the area under the graph within the given interval), but I still don't understand how it links to a mean value. For a finite set of values I divide the sum of the values by however many values I have to obtain the mean, but in this case there are infinitely many values, so how is this a mean value?</p> <p>Thanks for your help.</p>
Ivo Terek
118,056
<p>Let <span class="math-container">$f:[a,b] \to \Bbb R$</span> be integrable. Consider the equidistant partition of <span class="math-container">$[a,b]$</span> into <span class="math-container">$n$</span> subintervals: <span class="math-container">$$\mathcal{P}_n: \quad a &lt; a + \frac{b-a}{n} &lt; a + 2\frac{(b-a)}{n} &lt; \cdots &lt; a + (n-1)\frac{(b-a)}{n} &lt; b.$$</span>The length of every subinterval is <span class="math-container">$(b-a)/n$</span>. Then by the definition of the integral, we have <span class="math-container">$$\lim_{n \to +\infty} \sum_{k=1}^n f\left(a+k\frac{(b-a)}{n}\right) \frac{b-a}{n} = \int_a^b f(x)\,{\rm d}x,$$</span> and hence <span class="math-container">$$\lim_{n\to +\infty} \frac{1}{n} \sum_{k=1}^n f\left(a+k\frac{(b-a)}{n}\right) = \frac{1}{b-a} \int_a^bf(x)\,{\rm d}x.$$</span>The thing inside the limit is the arithmetic mean of the values of <span class="math-container">$f$</span> on right-endpoints of the intervals in the partition. And the integral is a limit of means, i.e., a &quot;continuous&quot; mean of <span class="math-container">$f$</span> on the entire interval <span class="math-container">$[a,b]$</span>. Of course, you can do the same with left-endpoints.</p> <p>Such means are very frequent in Measure Theory, where one replaces the Riemann integral by a Lebesgue integral, and so on. 
For instance, <a href="https://en.wikipedia.org/wiki/Lebesgue_differentiation_theorem" rel="nofollow noreferrer">Lebesgue's differentiation theorem</a> gives the most general statement of what happens when, say, you have intervals of the form <span class="math-container">$[x_0,x_0+h]$</span> and want to see what happens with <span class="math-container">$$\lim_{h \to 0} \frac{1}{h} \int_{x_0}^{x_0+h} f(x)\,{\rm d}x.$$</span>The above is the mean of <span class="math-container">$f$</span> on the interval <span class="math-container">$[x_0,x_0+h]$</span> and, under suitable assumptions, this limit equals <span class="math-container">$f(x_0)$</span>, as one might expect.</p> <p>One tool for proving theorems of this type is the so-called <a href="https://en.wikipedia.org/wiki/Hardy%E2%80%93Littlewood_maximal_function" rel="nofollow noreferrer">Hardy-Littlewood maximal operator</a>, which is defined in terms of means. And so on and so on.</p> <p>Bottom line: keep studying analysis and you will see this appear everywhere.</p>
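The limit in the answer above can be watched numerically; a minimal Python sketch (an added illustration, not part of the original answer):

```python
# For f(x) = x^2 on [0, 1], the average of f over right-endpoints of an
# equidistant partition tends to (1/(b-a)) * integral = 1/3.
def endpoint_mean(f, a, b, n):
    return sum(f(a + k * (b - a) / n) for k in range(1, n + 1)) / n

means = [endpoint_mean(lambda x: x * x, 0.0, 1.0, n) for n in (10, 100, 10000)]
assert abs(means[-1] - 1 / 3) < 1e-3
assert abs(means[-1] - 1 / 3) < abs(means[0] - 1 / 3)   # error shrinks with n
```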
3,902,418
<p>Ok, so I know that the mean value of a function, <span class="math-container">$f(x)$</span>, on the interval <span class="math-container">$[a,b]$</span> is given by (or defined by?) <span class="math-container">$$\frac{1}{b-a}\int_a^bf(x)~dx$$</span> but I have <span class="math-container">$2$</span> basic questions about this:</p> <p><strong><span class="math-container">$1$</span></strong>: From a purely mathematical point of view, does this have any practical use? Does it give us any extra weapon to add to our mathematical arsenal?</p> <p><strong><span class="math-container">$2$</span></strong>: How exactly can I interpret this as a mean value? I've seen the geometrical interpretation (looking at the area under the graph within the given interval), but I still don't understand how it links to a mean value. For a finite set of values I divide the sum of the values by however many values I have to obtain the mean, but in this case there are infinitely many values, so how is this a mean value?</p> <p>Thanks for your help.</p>
Stefan Lafon
582,769
<p>I'd like to complement the answers from @Lee Mosher and @Ivo Terek with a data compression angle. That's the kind of angle that helped me develop an intuition behind the formal concept.</p> <p>Your function <span class="math-container">$f$</span> typically assumes several values on <span class="math-container">$[a,b]$</span>, but if you wanted to summarize it with <span class="math-container">${\bf one}$</span> number <span class="math-container">$v$</span>, then what number would you pick?</p> <p>There are several possibilities, and surely some numbers are better than others (for instance, you wouldn't pick a negative number if your function is known to assume positive values). Intuitively, you'd want a value that's right in the middle of the values assumed by <span class="math-container">$f$</span>. Among all possibilities, the mean value is a really good choice. The reason is that, among all the other possible values you could have picked, the mean minimizes the (quadratic) distance with the values taken by the function: <span class="math-container">$$v = \arg\min_u \int_a^b |f(x)-u|^2\,dx$$</span> In other words, it is the value that is closest to all values of <span class="math-container">$f$</span> on that interval.</p> <p>So, if you view your function as a signal, then an obvious way to compress (summarize) it would be to take the mean value. There are other interpretations for the mean value. In physics, this mean value represents the center of mass of a solid. In signal processing, it is the first mode of vibration of a complex wave (first Fourier mode). In statistics and probabilities, if you keep drawing a random variable (e.g. rolling a dice) and you look at the average of all the numbers you get, it will converge towards the mean of the distribution (law of large numbers). In data mining, the mean value of a data set is its centroid. And the list goes on and on...</p>
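The arg-min characterization above has a discrete analogue that is easy to check; a small Python sketch (an added illustration with arbitrary sample values):

```python
# For sampled values of f, the sum of squared distances sum (f_i - u)^2
# is minimized at the arithmetic mean of the samples.
samples = [0.3, 1.7, 2.0, -0.5, 4.1]
mean = sum(samples) / len(samples)          # 1.52
sq_dist = lambda u: sum((s - u) ** 2 for s in samples)

# scan a grid of candidate values of u around the samples
grid = [i / 100 for i in range(-100, 500)]
best = min(grid, key=sq_dist)
assert abs(best - mean) < 0.01
assert all(sq_dist(mean) <= sq_dist(u) for u in grid)
```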
3,372,832
<blockquote> <p><strong>9)</strong> Is <span class="math-container">$$ \sum_{n=1}^\infty \delta_n \tag{7.10.1} $$</span> a well-defined distribution? Note, to be a well-defined distribution, its action on any test function should be a finite number. Provide an example of a function <span class="math-container">$f(x)$</span> whose derivative in the sense of distributions is <span class="math-container">$(7.10.1)$</span></p> </blockquote> <p>Hello, I want to find a distribution whose distributional derivative is the sum of the delta functions (<span class="math-container">$\delta_1$</span> to <span class="math-container">$\delta_k$</span>). I found that the distributional derivative of a sum of shifted Heaviside functions <span class="math-container">$H(x-a)$</span> is the corresponding sum of delta functions. However, I have trouble showing that the sum of shifted Heaviside functions converges in the sense of distributions. If I can establish this convergence, then, by the theorem, its derivative is the convergent sum of delta functions in the sense of distributions. </p>
Henno Brandsma
4,280
<p><span class="math-container">$\gamma[[0,1]] \subseteq D$</span> where <span class="math-container">$D$</span> is some closed disk. </p> <p>It's easy to see (e.g. by path-connectedness, as you say) that <span class="math-container">$\Bbb C\setminus D$</span> is connected. And as <span class="math-container">$\gamma^\ast=\Bbb C\setminus \gamma[[0,1]] \supseteq \Bbb C \setminus D$</span>, and the latter set is unbounded and connected, there is at least one unbounded component of <span class="math-container">$\gamma^\ast$</span>. That is all that Rudin's reasoning gives.</p> <p>To get the at most (or exactly) one, consider the component of <span class="math-container">$\infty$</span> in <span class="math-container">$\Bbb C \setminus \gamma[[0,1]]$</span> in the Riemann sphere <span class="math-container">$\Bbb C^\ast$</span>.</p>
3,330,938
<p>On the Wikipedia page about the Weierstrass factorization theorem one can find a sentence mentioning a generalized version that works for meromorphic functions. I mean:</p> <blockquote> <p>We have the sets of zeros and poles of a function <span class="math-container">$f$</span>. How could we use those sets to find a formula for <span class="math-container">$f$</span>?</p> </blockquote> <p>I think that it should be in the form of a quotient of two entire functions.</p>
C. Brendel
529,214
<p>Given a meromorphic function <span class="math-container">$f$</span> with poles <span class="math-container">$(p_i)_{i\in I}$</span> and zeros <span class="math-container">$(z_i)_{i\in J}$</span> repeated according to multiplicity, we have the corresponding Weierstrass product for the poles <span class="math-container">$$\Pi(s)=s^{m}\prod_{i\in I} E_{n_i}\left(\frac{s}{p_i}\right)$$</span> where <span class="math-container">$m$</span> is the order of the pole at <span class="math-container">$s=0$</span> and <span class="math-container">$(n_i)_{i\in I}$</span> is a sequence of positive integers such that the Weierstrass sum <span class="math-container">$$\sum_{i \in I} \left(\frac{r}{\vert p_i \vert}\right)^{1+n_i}$$</span> converges for every <span class="math-container">$r&gt;0$</span>, and thus the product converges. This product then defines a holomorphic function on the whole of <span class="math-container">$\mathbb{C}$</span> that has a zero of a given order at a point <span class="math-container">$s$</span> if and only if <span class="math-container">$f$</span> has a pole of the same order at <span class="math-container">$s$</span>. Thus the function <span class="math-container">$\Pi(s)\cdot f$</span> can be continued to a holomorphic function on all of <span class="math-container">$\mathbb{C}$</span> with exactly the zeros <span class="math-container">$(z_i)_{i\in J}$</span> of <span class="math-container">$f$</span> (again according to multiplicity). 
Since this function is holomorphic on all of <span class="math-container">$\mathbb{C}$</span>, by Weierstrass we have a sequence of positive integers <span class="math-container">$(m_i)_{i\in J}$</span> and a zero-free holomorphic function <span class="math-container">$g$</span> such that: <span class="math-container">$$\Pi(s)\cdot f(s)=g(s)s^{m'} \prod_{i\in J} E_{m_i}\left(\frac{s}{z_i}\right)$$</span> where <span class="math-container">$m'$</span> denotes the order of the zero at <span class="math-container">$s=0$</span> of <span class="math-container">$f$</span>. Altogether we have that: <span class="math-container">$$f(s)=g(s) s^{m'-m}\frac{\prod_{i\in J} E_{m_i}\left(\frac{s}{z_i}\right)}{\prod_{i\in I}E_{n_i} \left(\frac{s}{p_i}\right)}$$</span></p>
3,840,253
<blockquote> <p>How to show that <span class="math-container">$\csc x - \csc\left(\frac{\pi}{3} + x \right) + \csc\left(\frac{\pi}{3} - x\right) = 3 \csc 3x$</span>?</p> </blockquote> <p>My attempt:<br /> <span class="math-container">\begin{align} LHS &amp;= \csc x - \csc\left(\frac{\pi}{3} + x\right) + \csc\left(\frac{\pi}{3} - x\right) \\ &amp;= \frac{1}{\sin x} - \frac{1}{\sin\left(\frac{\pi}{3} + x\right)} + \frac{1}{\sin\left(\frac{\pi}{3} -x\right)} \\ &amp;= \frac{\sin x \sin\left(\frac{\pi}{3} + x\right) + \sin\left(\frac{\pi}{3} + x\right) \sin\left(\frac{\pi}{3} - x\right) - \sin\left(\frac{\pi}{3} - x\right) \sin x }{\sin x \sin\left(\frac{\pi}{3} + x\right) \sin\left(\frac{\pi}{3} - x\right)} \\ &amp;=\frac{4}{\sin 3x}\left(\sin x \sin\left(\frac{\pi}{3} + x\right) + \sin\left(\frac{\pi}{3} + x\right) \sin\left(\frac{\pi}{3} - x\right) - \sin\left(\frac{\pi}{3} - x\right) \sin x\right) \\ &amp;=\frac{4}{\sin 3x}\left(\sin x \sin\left(\frac{\pi}{3} + x\right) - \sin\left(\frac{\pi}{3} - x\right) \left(\sin x - \sin\left(\frac{\pi}{3} + x\right)\right)\right) \\ &amp;=\frac{4}{\sin 3x}\left(\sin x \sin\left(\frac{\pi}{3} + x\right) - \sin\left(\frac{\pi}{3} - x\right) \left(2\sin\frac{-\pi}{6}\cos\left(x + \frac{\pi}{6}\right)\right)\right) \\ &amp;=\frac{4}{\sin 3x}\left(\sin x \sin\left(\frac{\pi}{3} + x\right) + \sin\left(\frac{\pi}{3} - x\right) \cos\left(x + \frac{\pi}{6}\right)\right) \\ \end{align}</span> How should I proceed? Or did I make some mistakes somewhere? Thanks in advance.</p>
player3236
435,724
<p>Using the identities</p> <p><span class="math-container">$$\sin A \sin B = \frac12 (\cos (A-B) - \cos (A+B))$$</span> <span class="math-container">$$\sin A \cos B = \frac12 (\sin (A+B) + \sin (A-B))$$</span></p> <p>we have <span class="math-container">\begin{align} &amp;\phantom{=}\sin x \sin\left(\frac{\pi}{3} + x\right) + \sin\left(\frac{\pi}{3} - x\right) \cos\left(x + \frac{\pi}{6}\right)\\&amp;=\frac12\left(\cos \left(-\frac\pi3\right)-\cos\left(2x+\frac\pi3\right)+\sin\frac\pi2+\sin\left(\frac\pi6-2x\right)\right)\\ &amp;=\frac 12\left(\frac12+1-\cos\left(2x+\frac\pi3\right)+\cos\left(\frac\pi2 - \left(\frac\pi6-2x\right)\right)\right)\\ &amp;=\frac34+\frac 12\left(-\cos\left(2x+\frac\pi3\right)+\cos\left(2x+\frac\pi3\right)\right)\\ &amp;=\frac34 \end{align}</span></p> <p>Plugging this into your last line gives <span class="math-container">$\frac{4}{\sin 3x}\cdot\frac34=\frac{3}{\sin 3x}=3\csc 3x$</span>, which completes the proof.</p>
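As a quick numerical sanity check of the identity being proved (not part of the original answer; the sample points are arbitrary), a Python sketch:

```python
import math

def lhs(x):
    # csc x - csc(pi/3 + x) + csc(pi/3 - x)
    csc = lambda t: 1.0 / math.sin(t)
    return csc(x) - csc(math.pi / 3 + x) + csc(math.pi / 3 - x)

def rhs(x):
    # 3 csc 3x
    return 3.0 / math.sin(3 * x)

for x in (0.2, 0.5, 0.9, 1.0):
    assert abs(lhs(x) - rhs(x)) < 1e-9
print("identity holds at the sampled points")
```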
4,082,588
<blockquote> <p><strong>Definition:</strong> <span class="math-container">$\beta X$</span> is the Stone-Čech compactification of <span class="math-container">$X$</span>.</p> </blockquote> <blockquote> <p><strong>Theorem A:</strong> If <span class="math-container">$K$</span> is a compact Hausdorff space and <span class="math-container">$f\colon X \to K$</span> is<br /> continuous, there is a continuous <span class="math-container">$F: \beta X \to K$</span> such that <span class="math-container">$F \circ e = f$</span>, where <span class="math-container">$e\colon X\to\beta X$</span> is an embedding into a compact Hausdorff space.</p> </blockquote> <blockquote> <p>Show that <span class="math-container">$\left|\beta\mathbb{N}\right|\geq\left|\beta\mathbb{Q}\right|$</span>.</p> </blockquote> <p>Let <span class="math-container">$f\colon\mathbb{N}\to\mathbb{Q}$</span> be a bijection. As <em>any</em> function from the discrete topology is continuous (<span class="math-container">$\mathbb{N}$</span> with the relative topology from <span class="math-container">$\mathbb{R}_\text{std.}$</span> is the discrete topology), we can enlarge the range to <span class="math-container">$\beta\mathbb{Q}$</span>, which is a compact Hausdorff space, so that <span class="math-container">$f\colon\mathbb{N}\to\beta\mathbb{Q}$</span> is continuous. By <strong>Theorem A</strong> we can extend <span class="math-container">$f$</span> uniquely to a continuous function <span class="math-container">$\beta f\colon\beta\mathbb{N}\to\beta\mathbb{Q}$</span>.</p> <p>I need to show that <span class="math-container">$\beta f\colon\beta\mathbb{N}\to\beta\mathbb{Q}$</span> is surjective, i.e. <span class="math-container">$\beta f[\beta\mathbb{N}]=\beta\mathbb{Q}$</span>. 
As for any mapping <span class="math-container">$\beta f[\beta\mathbb{N}]\subseteq\beta\mathbb{Q}$</span>, it's enough to show that <span class="math-container">$\beta f[\beta\mathbb{N}]\supseteq\beta\mathbb{Q}$</span>.</p>
Alessandro Codenotti
136,041
<p>A different approach which doesn't give an explicit bijection is to use the characterization of <span class="math-container">$\beta\Bbb N$</span> as the space of ultrafilters on <span class="math-container">$\Bbb N$</span>, which are <span class="math-container">$2^{2^{|\Bbb N|}}$</span>, together with the fact that <span class="math-container">$\beta\Bbb Q$</span> is a separable Hausdorff space, hence has also cardinality at most <span class="math-container">$2^{2^{|\Bbb Q|}}$</span>.</p>
4,082,588
<blockquote> <p><strong>Definition:</strong> <span class="math-container">$\beta X$</span> is the Stone-Čech compactification of <span class="math-container">$X$</span>.</p> </blockquote> <blockquote> <p><strong>Theorem A:</strong> If <span class="math-container">$K$</span> is a compact Hausdorff space and <span class="math-container">$f\colon X \to K$</span> is<br /> continuous, there is a continuous <span class="math-container">$F: \beta X \to K$</span> such that <span class="math-container">$F \circ e = f$</span>, where <span class="math-container">$e\colon X\to\beta X$</span> is an embedding into a compact Hausdorff space.</p> </blockquote> <blockquote> <p>Show that <span class="math-container">$\left|\beta\mathbb{N}\right|\geq\left|\beta\mathbb{Q}\right|$</span>.</p> </blockquote> <p>Let <span class="math-container">$f\colon\mathbb{N}\to\mathbb{Q}$</span> be a bijection. As <em>any</em> function from the discrete topology is continuous (<span class="math-container">$\mathbb{N}$</span> with the relative topology from <span class="math-container">$\mathbb{R}_\text{std.}$</span> is the discrete topology), we can enlarge the range to <span class="math-container">$\beta\mathbb{Q}$</span>, which is a compact Hausdorff space, so that <span class="math-container">$f\colon\mathbb{N}\to\beta\mathbb{Q}$</span> is continuous. By <strong>Theorem A</strong> we can extend <span class="math-container">$f$</span> uniquely to a continuous function <span class="math-container">$\beta f\colon\beta\mathbb{N}\to\beta\mathbb{Q}$</span>.</p> <p>I need to show that <span class="math-container">$\beta f\colon\beta\mathbb{N}\to\beta\mathbb{Q}$</span> is surjective, i.e. <span class="math-container">$\beta f[\beta\mathbb{N}]=\beta\mathbb{Q}$</span>. 
As for any mapping <span class="math-container">$\beta f[\beta\mathbb{N}]\subseteq\beta\mathbb{Q}$</span>, it's enough to show that <span class="math-container">$\beta f[\beta\mathbb{N}]\supseteq\beta\mathbb{Q}$</span>.</p>
Henno Brandsma
4,280
<p>If you want to be more precise:</p> <p>Let <span class="math-container">$(e_1, \beta \Bbb N)$</span> be the Stone-Čech compactification of <span class="math-container">$\Bbb N$</span>, and <span class="math-container">$(e_2, \beta \Bbb Q)$</span> that of <span class="math-container">$\Bbb Q$</span>.</p> <p>So indeed take any bijection <span class="math-container">$f: \Bbb N \to \Bbb Q$</span>. This is trivially continuous and indeed <span class="math-container">$f':\Bbb N \to \beta \Bbb Q$</span> given by <span class="math-container">$f' = e_2 \circ f$</span> is also continuous. Note that <span class="math-container">$f'[\Bbb N] = e_2[\Bbb Q]$</span> which is by definition (of compactification) a dense subspace of <span class="math-container">$\beta \Bbb Q$</span>.</p> <p>So Thm A gives us <span class="math-container">$\beta f': \beta \Bbb N \to \beta \Bbb Q$</span> so that <span class="math-container">$$\beta f' \circ e_1 =f'$$</span> as functions on <span class="math-container">$\Bbb N$</span>.</p> <p>Now consider <span class="math-container">$\beta f' [\beta \Bbb N]$</span>. It is compact so closed in the Hausdorff space <span class="math-container">$\beta \Bbb Q$</span>. On the other hand, applying what we know so far:</p> <p><span class="math-container">$$\beta f'[e_1[\Bbb N]] = f'[\Bbb N] = e_2[f[\Bbb N]] = e_2[\Bbb Q]$$</span></p> <p>we get that <span class="math-container">$\beta f'[\beta \Bbb N]$</span> contains the dense set <span class="math-container">$e_2[\Bbb Q]$</span>. And the only set that is closed <em>and</em> dense (in <span class="math-container">$\beta \Bbb Q$</span>) is <span class="math-container">$\beta \Bbb Q$</span>. 
It follows that <span class="math-container">$\beta f'$</span> is onto and so <span class="math-container">$|\beta \Bbb Q| \le |\beta\Bbb N|$</span>.</p> <hr /> <p>It's easier to &quot;think of&quot; <span class="math-container">$\beta \Bbb Q$</span> and <span class="math-container">$\beta \Bbb N$</span> as supersets of <span class="math-container">$\Bbb Q$</span> resp. <span class="math-container">$\Bbb N$</span>, and theorem A as being about a &quot;real&quot; extension. The proof would then just be: the bijection <span class="math-container">$f$</span> is &quot;the same&quot; as <span class="math-container">$f: \Bbb N \to \beta \Bbb Q$</span> (codomain extension), <span class="math-container">$\beta f : \beta \Bbb N \to \beta \Bbb Q$</span> is a real extension, <span class="math-container">$\Bbb N$</span> is dense in <span class="math-container">$\beta \Bbb N$</span>, <span class="math-container">$\beta f[\Bbb N] = \Bbb Q$</span> is then dense, and the same argument applies (<span class="math-container">$\beta f[\beta \Bbb N]$</span> being closed and dense). But in the actual proof we have to be more precise than that, as I was above, using the embeddings and the definition of a compactification. Still, the idea is easier to grasp in such a concrete &quot;subspaces setting&quot;, as it were.</p>
3,566,469
<p>I am confused by a discussion with a colleague. The discussion is about the period of a periodic function.</p> <p>For example, the periodic function <span class="math-container">$$f(x)=\sin(x), \quad x\in (0,\infty)$$</span> has period <span class="math-container">$2\pi$</span>. If I change the scale and build the function, <span class="math-container">$$g(x)=\sin(\ln x),\quad x\in (0,\infty)$$</span> is this new function, g, periodic? If it is, what is the period?</p> <p><strong>EDIT</strong></p> <p>I will clarify my point. If I change the scale of the function <span class="math-container">$g$</span>, let's say, <span class="math-container">$\ln x =u$</span> then I will have function <span class="math-container">$$h(u)=\sin u, \quad u\in \mathbb R$$</span> and now <span class="math-container">$h$</span> is periodic on <span class="math-container">$u\in \mathbb R $</span>. </p> <p>So, my point is can I say that <span class="math-container">$g$</span> is not periodic in <span class="math-container">$x$</span>-domain but it is in <span class="math-container">$\log$</span>-domain?</p>
mathcounterexamples.net
187,663
<p><span class="math-container">$g$</span> is not periodic as the difference between two consecutive roots is unbounded as we consider the roots going to <span class="math-container">$\infty$</span>.</p>
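The unbounded-gap argument can be made concrete: the zeros of $\sin(\ln x)$ on $(0,\infty)$ are $x=e^{n\pi}$, and the gaps between consecutive zeros blow up. A small Python sketch (not part of the original answer):

```python
import math

# Zeros of sin(ln x) on (0, ∞) sit at x = e^(n*pi), n an integer.
roots = [math.exp(n * math.pi) for n in range(6)]
gaps = [b - a for a, b in zip(roots, roots[1:])]

# The gaps between consecutive zeros grow without bound,
# so no single period T can work for all x.
assert all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:]))
print(gaps[0], gaps[-1])
```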
3,805,089
<p>I have directional vectors <span class="math-container">$a, b, c, d$</span> in vector 2 space as seen in the images below. Unfortunately I don't have the sufficient vocabulary to explain this in more mathematical terms. In rough terms I need to check if vector <span class="math-container">$c$</span> and <span class="math-container">$d$</span> are &quot;on the same side in between&quot; vector <span class="math-container">$a$</span> and <span class="math-container">$b$</span> as in the first and second image, or they are not as in the third image. How would I do this?</p> <p><a href="https://i.stack.imgur.com/WjgAs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WjgAs.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/dp2Rs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dp2Rs.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/2PC1l.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2PC1l.jpg" alt="enter image description here" /></a></p>
Mithrandir
793,719
<p>Find the angles <span class="math-container">$\angle ba$</span>, <span class="math-container">$\angle ca$</span>, and <span class="math-container">$\angle da$</span>. If <span class="math-container">$|\angle ba|$</span> is greater than both <span class="math-container">$|\angle ca|$</span> and <span class="math-container">$|\angle da|$</span>, both vectors are in the positive cone.</p>
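A common computational alternative to comparing angles is a sign test on 2-D cross products. The Python sketch below is a standard technique rather than part of the original answer, and the example vectors are made up for illustration:

```python
def cross(u, v):
    # z-component of the 3-D cross product of two 2-D vectors
    return u[0] * v[1] - u[1] * v[0]

def between(a, b, v):
    # True if direction v lies inside the convex angle swept from a to b
    # (assumes the angle between a and b is less than pi).
    s = cross(a, b)
    return cross(a, v) * s >= 0 and cross(v, b) * s >= 0

a, b = (1.0, 0.0), (0.0, 1.0)   # cone spanning the first quadrant
c, d = (1.0, 1.0), (2.0, 1.0)   # both inside the cone
e = (-1.0, 1.0)                 # outside the cone

print(between(a, b, c) and between(a, b, d))  # True
print(between(a, b, e))                       # False
```

This avoids computing any inverse trigonometric functions, which matters when the check runs in an inner loop.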
2,972,085
<p><a href="https://i.stack.imgur.com/pcOfx.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/pcOfx.jpg" alt="enter image description here"></a></p> <p>My friend showed me the diagram above, and asked me </p> <p>"What is the area of a BLACK circle with radius of 1 of BLUE circle?"</p> <p>So, I solved it by an algebraic method. <span class="math-container">$$$$</span></p> <p>Let the center of the <span class="math-container">$\color{black}{BLACK}$</span> circle be <span class="math-container">$(0,0)$</span>.</p> <p>We can set, </p> <p><span class="math-container">$x^2 + (y-R)^2 = R^2$</span> , where <span class="math-container">$R$</span> denotes the radius of the <span class="math-container">$\color{red}{RED}$</span> circle.</p> <p><span class="math-container">$(x-p)^2 + (y-r)^2 = r^2 $</span>, where <span class="math-container">$(p,r)$</span> denotes the center of the <span class="math-container">$\color{blue}{BLUE}$</span> circle. <span class="math-container">$$$$</span> These imply</p> <p><span class="math-container">$ 2R=r+ \sqrt{p^2 + r^2}$</span></p> <p><span class="math-container">$p^2 + (R-r)^2 = (R+r)^2 $</span></p> <p>So, </p> <p><span class="math-container">$ 2r=R$</span></p> <p><span class="math-container">$$$$</span></p> <p>But he wants not an algebraic but a <strong>Geometrical Method.</strong></p> <p>How can I show <span class="math-container">$ 2r=R$</span> with a <strong>Geometrical Method</strong>?</p> <p>Thank you very much.</p> <p><span class="math-container">$$$$</span></p> <p>(Actually I constructed the diagram with an algebraic method, </p> <p>but I'd like to know how to construct this with a geometrical method.)</p>
Federico
180,428
<p>If you perform a circular inversion w.r.t. the black circle, the red circle becomes the red tangent line in the picture below, while the blue circle gets mapped to another circle, tangent to the black circle, the black line and the red line. The diameter of this new circle must be <span class="math-container">$1$</span>. Therefore <span class="math-container">$\overline{AB}=1+1=2$</span> and <span class="math-container">$\overline{AC}=1/\overline{AB}=1/2$</span>.</p> <p><a href="https://i.stack.imgur.com/vXaCx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vXaCx.png" alt="enter image description here"></a></p>
964,372
<p>I have a general question.</p> <p>If there is a matrix which is invertible and I multiply it by other matrices which are invertible, will the result also be an invertible matrix?</p> <p>My intuition says it is, but I'm not sure how to prove it.</p> <p>Any ideas? Thanks.</p>
symmetricuser
125,084
<p>For such an arrangement for $n$ people, there are two cases: $n$ is by itself or $n$ is paired with someone.</p> <p>For the first case, if you remove $n$, then it's just an arrangement for $n-1$ people, and so, there are $A_{n-1}$ arrangements where $n$ is by itself.</p> <p>Moving on to the second case, suppose that $n$ is paired off with $i$. Then, removing $n$ and $i$, it is a valid arrangement for $n-2$ people, and so, there are $A_{n-2}$ arrangements of $n$ being paired off with $i$. Since there are $n-1$ choices for $i$, you get the recursion $$A_n = A_{n-1} + (n-1)A_{n-2}.$$</p>
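The recursion can be checked against a brute-force count, reading an "arrangement" as an involution, i.e. a permutation that is its own inverse (each person alone or paired with one other) — an assumption about the unstated problem. A Python sketch under that assumption:

```python
from itertools import permutations

def brute(n):
    # Count involutions: permutations p with p(p(i)) = i for all i.
    return sum(1 for p in permutations(range(n))
               if all(p[p[i]] == i for i in range(n)))

def recur(n):
    A = [1, 1]  # A_0 = A_1 = 1
    for m in range(2, n + 1):
        A.append(A[m - 1] + (m - 1) * A[m - 2])
    return A[n]

for n in range(1, 7):
    assert brute(n) == recur(n)
print([recur(n) for n in range(1, 7)])  # [1, 2, 4, 10, 26, 76]
```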
1,828,042
<p>This is my first question on this site, and this question may sound disturbing. My apologies, but I truly need some advice on this.</p> <p>I am a sophomore math major at a fairly good math department (top 20 in the U.S.), and after taking some upper-level math courses (second courses in abstract algebra and real analysis, differential geometry, etc), I can say that I genuinely like math, and if I have even a bit of a chance to succeed, I will go to graduate school and choose math research as my career.</p> <p>However, this is exactly the thing that I am afraid of. My grades on the courses are mediocre (my GPA for math courses is around 3.7), and for the courses I got A's, I had to work very hard, much harder than others to get the same result, and I often get confused in many of the classes, while the others understand the material quickly and can answer the professor's questions, when I didn't even understand what the professor was really asking. I really wonder, if I have to work hard even on undergraduate courses, does that mean I am not naturally smart enough for more advanced math, especially compared to everyone else in my class? Can I even survive graduate-level math if I sometimes struggle with undergraduate courses? I always believed that adequate mathematicians could do well in their undergraduate courses easily. In my case, even if I work very hard, I forget definitions/theorems easily and then of course forget how to use them to solve problems.</p> <p>Is it still worth trying if I am significantly behind the regular level and have to work hard even for undergraduate courses, given that there are a lot of smart people who can understand them instantly? This feeling hurts me a lot: especially when I am struggling with something in math, I always feel I am useless trash and ask myself why I am so stupid.</p> <p>I thought about talking to my professors about this issue, but I find this too embarrassing to start. 
I am really afraid that if I ask them this question, they may tell me the truth in person that "you are really not smart enough to go to graduate school".</p> <p>So how can I tell if it is still worth for me to think about this path, or I should realize that I have no chance to succeed and give up now? I appreciate encouraging comments, but please, please be honest on this case because it is really important for my future plan. Thanks again for your advice, and I am really grateful.</p>
MathematicsStudent1122
238,417
<p>The answers here are rather idealistic. They seem to be based more on trite cliches rather than concrete reasoning or evidence. </p> <p>The fact is, academia is competitive and jobs are scarce. Your grades matter. Your performance relative to your peers matters. Being passionate at math or being interested in the subject is a necessary but insufficient condition to succeed in graduate school. </p> <p>I'm going to be brutally honest with you: do yourself a favour and don't pursue graduate school. I'm sorry if that's your dream, but we must have a sense of realism. It's nothing more than several years of daunting work and, in return, you get to call yourself a "mathematician". Job prospects? Salary? Unexceptional, if you're lucky. </p> <p>Learn to program, and get a job in that field; people who are strong at math are almost invariably good at programming. If you're struggling and genuinely think you're incapable, I'm not going to lie to you and say that passion and perseverance is necessarily going to fix everything; it might, of course, but that's a conclusion you must reach, perhaps with the help of your professors. </p> <p>I'm not saying any of this because I think you lack the talent; I'm saying it because it's the advice I'd give to any friend, unless he or she is genuinely a genius. Indeed, I'm making this same decision myself. </p> <p>This answer may well be down-voted, and that's fine; it's simply a consequence of this site's demographics. Responses to this sort of question will invariably be opinionated, but, nevertheless, it is an important one. </p>
2,886,460
<blockquote> <p>Let $\omega$ be a complex number such that $\omega^5 = 1$ and $\omega \neq 1$. Find $$\frac{\omega}{1 + \omega^2} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3}.$$</p> </blockquote> <p>I have tried combining the first and third terms &amp; first and last terms. Here is what I have so far:</p> <blockquote> <p>\begin{align*} \frac{\omega}{1 + \omega^2} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3} &amp;= \frac{\omega}{1 + \omega^2} + \frac{\omega^4}{1 + \omega^3} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} \\ &amp;= \dfrac{\omega(1+\omega^3) + \omega^4(1+\omega^2)}{(1+\omega^2)(1+\omega^3)} + \dfrac{\omega^2(1+\omega) + \omega^3(1+\omega^4)}{(1+\omega^4)(1+\omega)} \\ &amp;= \dfrac{\omega + 2\omega^4 +\omega^6}{1+\omega^2 + \omega^3 + \omega^5} + \dfrac{\omega^2 + 2\omega^3 + \omega^7}{1+\omega + \omega^4 + \omega^5} \\ &amp;= \dfrac{2\omega + 2\omega^4}{2+\omega^2 + \omega^3} + \dfrac{2\omega^2 + 2\omega^3}{2+\omega+\omega^4} \end{align*}</p> </blockquote> <p>OR</p> <blockquote> <p>\begin{align*} \frac{\omega}{1 + \omega^2} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3} &amp;= \frac{\omega}{1 + \omega^2} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3} + \frac{\omega^2}{1 + \omega^4} \\ &amp;= \dfrac{\omega(1+\omega) + \omega^3(1+\omega^2)}{(1+\omega)(1+\omega^2)} + \dfrac{\omega^2(1+\omega^3) + \omega^4(1+\omega^4)}{(1+\omega^3)(1+\omega^4)} \\ &amp;= \dfrac{\omega + \omega^2 + \omega^3 + \omega^5}{1+\omega + \omega^2 + \omega^3} + \dfrac{\omega^2 + \omega^4 + \omega^5 + \omega^8}{1 + \omega^3 + \omega^4 + \omega^7} \\ &amp;= \dfrac{2\omega+\omega^2+\omega^3}{1+\omega+\omega^2+\omega^4} + \dfrac{1+\omega+\omega^2+\omega^4}{1+2\omega^3+\omega^4} \end{align*}</p> </blockquote>
Anas c
832,806
<p>Since <span class="math-container">$w^5=1$</span>, we have <span class="math-container">$w^7=w^2$</span> and <span class="math-container">$w^8=w^3$</span>. Pairing the first term with the third and the second with the fourth, <span class="math-container">\begin{align} \frac{w}{1+w^2 }+\frac{w^3 }{1+w} &amp;=\frac{w+w^2 +w^3 +w^5}{(1+w)(1+w^2) }=\frac{w+w^2 +w^3 +1}{w+w^2 +w^3 +1}=1,\\ \frac{w^2 }{1 +w^4}+\frac{w^4}{1+w^3 } &amp;=\frac{w^2 +w^5+w^4+w^8}{(1+w^4)(1+w^3) }=\frac{w^2 +1+w^4+w^3 }{1+w^3 +w^4+w^2}=1, \end{align}</span> so the sum is <span class="math-container">$2$</span>.</p>
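A quick numerical check with a concrete primitive fifth root of unity (a Python sketch, not part of the original answer):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 5)  # a primitive 5th root of unity, w != 1
assert abs(w**5 - 1) < 1e-12

total = (w / (1 + w**2) + w**2 / (1 + w**4)
         + w**3 / (1 + w) + w**4 / (1 + w**3))
print(total)  # ≈ (2+0j)
```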
3,059,857
<p>We are supposed to use this formula, for which I can't find any explanation anywhere, and our teacher didn't explain anything, so if anyone could help me I would appreciate it. </p> <p><span class="math-container">$ x = A + k \times 2\pi$</span></p> <p>and</p> <p><span class="math-container">$x = \pi - A + k \times 2\pi$</span> </p> <p>where <span class="math-container">$k$</span> is supposed to be an arbitrary integer, and <span class="math-container">$A$</span> in this case is <span class="math-container">$\frac{11}{9}\pi$</span> </p>
Vasili
469,083
<p>When you have the equation <span class="math-container">$\sin x=a$</span>, the solutions are <span class="math-container">$x=\arcsin a +2\pi n$</span> and <span class="math-container">$x=\pi-\arcsin a + 2\pi n$</span>. This is based on the identity <span class="math-container">$\sin(\pi-a)=\sin a$</span>. In your case <span class="math-container">$a=\sin(\frac{11\pi}{9})$</span>, so the solutions are <span class="math-container">$x=\arcsin(\sin(\frac{11\pi}{9}))+2\pi n=-\frac{2\pi}{9}+2\pi n$</span> and <span class="math-container">$x=\pi-\arcsin(\sin(\frac{11\pi}{9}))+2\pi n=\frac{11\pi}{9}+2\pi n$</span>. The term <span class="math-container">$2\pi n$</span> is added because sine is <span class="math-container">$2 \pi$</span>-periodic, thus <span class="math-container">$\sin(a+2\pi n)=\sin(a)$</span>.</p>
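Numerically, with $A=\frac{11\pi}{9}$, both solution families check out; a Python sketch (not part of the original answer):

```python
import math

a = math.sin(11 * math.pi / 9)

x1 = math.asin(a)            # first family base point: arcsin(a) = -2*pi/9
x2 = math.pi - math.asin(a)  # second family base point: 11*pi/9

assert abs(x1 - (-2 * math.pi / 9)) < 1e-12
assert abs(x2 - 11 * math.pi / 9) < 1e-12

# Both families solve sin x = a for any integer n.
for n in (-1, 0, 1):
    assert abs(math.sin(x1 + 2 * math.pi * n) - a) < 1e-12
    assert abs(math.sin(x2 + 2 * math.pi * n) - a) < 1e-12
print("ok")
```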
2,054,175
<p>This problem is giving me loads of confusion. I just need someone to walk through it because I have the answer and I can't get to it to save my life. I have been on it for days. Please help.</p> <p>$$\frac{x + 3}{x - 4}\le 0$$ </p>
Dr. Sonnhard Graubner
175,066
<p>A fraction is $\le 0$ exactly when its numerator and denominator have opposite signs (or the numerator is zero); note also that $x\ne 4$, since the denominator cannot vanish. So we have only two cases: a) $$x+3\geq 0 \text{ and } x-4&lt;0,$$ which gives $-3\le x&lt;4$, or b) $$x+3\le 0 \text{ and } x-4&gt;0,$$ and this is impossible. Thus we have $$-3\le x&lt;4$$</p>
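A brute-force sample check of the claimed solution set, sketched in Python (exact rationals sidestep floating-point edge cases; not part of the original answer):

```python
from fractions import Fraction

def holds(x):
    # (x + 3) / (x - 4) <= 0; undefined at x = 4
    return x != 4 and (x + 3) / (x - 4) <= 0

# Sample rationals on a 0.1-spaced grid around the claimed solution set [-3, 4).
samples = [Fraction(n, 10) for n in range(-60, 61)]
solution = [x for x in samples if holds(x)]
assert all(-3 <= x < 4 for x in solution)
assert all(holds(x) for x in samples if -3 <= x < 4)
print("on the sample grid the inequality holds exactly on [-3, 4)")
```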
1,212,000
<p>I was trying to solve this square root problem, but I seem not to understand some basics. </p> <p>Here is the problem.</p> <p>$$\Bigg(\sqrt{\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2} - \sqrt[3]{\bigg(1 - \sqrt{2}\bigg)^3}\Bigg)^2$$</p> <p>The solution is as follows:</p> <p>$$\Bigg(\sqrt{\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2} - \sqrt[3]{\bigg(1 - \sqrt{2}\bigg)^3}\Bigg)^2 = \Bigg(\frac{3}{2} - \sqrt{2} - 1 + \sqrt{2}\Bigg)^2 = \bigg(\frac{1}{2}\bigg)^2 = \frac{1}{4}$$</p> <p>Now, what I don't understand is how the left part of the problem becomes: $$\frac{3}{2} - \sqrt{2}$$</p> <p>Because I thought that $$\sqrt{\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2}$$ equals $$\bigg(\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2\bigg)^{\frac{1}{2}}$$ Which becomes $$\sqrt{2} - \frac{3}{2}$$</p> <p>But as you can see I'm wrong. </p> <p>I think that there is a step involving absolute value that I overlook/don't understand. So could you please explain which property or rule of square roots is used to solve this problem? </p> <p>Thanks in advance</p>
Mankind
207,432
<p>Nicely put question.</p> <p>You are right about the absolute value missing somewhere. Indeed, we have:</p> <p>$$\sqrt{x^2} = |x|.$$</p> <p>In your case, we have</p> <p>$$\sqrt{\left(\sqrt{2}-\frac{3}{2}\right)^2}=\left|\sqrt{2}-\frac{3}{2}\right|.$$</p> <p>But $\sqrt{2}-\frac{3}{2}$ is negative, so the absolute value "chooses" the positive version of this, that is,</p> <p>$$\left|\sqrt{2}-\frac{3}{2}\right| = -\left(\sqrt{2}-\frac{3}{2}\right)=\frac{3}{2}-\sqrt{2}.$$</p> <p>I hope this helps.</p>
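The absolute-value subtlety is easy to confirm numerically; a Python sketch (not part of the original answer — note the real cube root of a negative number needs a small workaround, since `(-x) ** (1/3)` does not give the real root):

```python
import math

s = math.sqrt((math.sqrt(2) - 1.5) ** 2)      # = |sqrt(2) - 3/2| = 3/2 - sqrt(2)
t = (1 - math.sqrt(2)) ** 3                   # a negative number
cbrt_t = math.copysign(abs(t) ** (1 / 3), t)  # real cube root = 1 - sqrt(2)

value = (s - cbrt_t) ** 2
print(value)  # ≈ 0.25
```

The square root collapses to the positive value $3/2-\sqrt2$, while the cube root keeps the sign, exactly as the answer explains.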
3,340,686
<p>The <span class="math-container">$7$</span>th floor of a building is <span class="math-container">$23$</span>m above street level and <span class="math-container">$13$</span>th floor is <span class="math-container">$41$</span>m above street level. What is the height (above street level) of the first floor and what is the height of one floor?</p> <p>My working out is this:</p> <p><span class="math-container">$t_7=23$</span></p> <p><span class="math-container">$t_{13}=41$</span></p> <p><span class="math-container">$t_n=t_1 + (n-1) d$</span></p> <p><span class="math-container">$23= t_1 + 6d (1)$</span></p> <p><span class="math-container">$41 = t_1 + 12d (2)$</span></p> <p><span class="math-container">$18= 6d$</span></p> <p><span class="math-container">$d=3$</span></p> <p>Sub <span class="math-container">$3$</span> into equation <span class="math-container">$2$</span></p> <p><span class="math-container">$41=t_1 + 12(3)$</span></p> <p><span class="math-container">$41= t_1 + 36$</span></p> <p><span class="math-container">$5 = t_1$</span></p> <p>Can someone verify if I am doing it correct?</p>
NoChance
15,180
<p>Yes you are correct. The first floor's height above street level is (in meters): <span class="math-container">$$a_1=5 $$</span></p> <p>Also, with <span class="math-container">$d$</span> denoting the common difference of the arithmetic progression, <span class="math-container">$$d=3$$</span></p> <p>You should be able to verify the answer using the equation for the height of any floor from street level in meters to be:</p> <p><span class="math-container">$$a_n=5+(n-1)(3) \tag1$$</span> </p> <p>You already know that <span class="math-container">$a_7=23$</span>, so:</p> <p>Using (1), with <span class="math-container">$n=7$</span></p> <p><span class="math-container">$$a_7= 5+ (7-1)(3)=5+18=23$$</span></p> <p>Same goes for the <span class="math-container">$13$</span>th floor, its height above the street in meters is <span class="math-container">$41$</span> using (1) again:</p> <p><span class="math-container">$$a_{13}=5+(13-1)3=5+36=41$$</span></p> <p>Note that this site has a concise summary of <a href="https://en.wikipedia.org/wiki/Arithmetic_progression" rel="nofollow noreferrer">Arithmetic Progression Formulae</a>.</p>
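The same verification in code, just restating the arithmetic-progression formula (a Python sketch, not part of the original answer):

```python
def height(n, a1=5, d=3):
    # Height of floor n above street level, in metres: a_n = a1 + (n - 1) * d.
    return a1 + (n - 1) * d

assert height(7) == 23   # given in the question
assert height(13) == 41  # given in the question
print(height(1))  # 5
```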
1,893,168
<p>$$\lim_{x\to 0} {\ln(\cos x)\over \sin^2x} = ?$$</p> <p>I can solve this by using L'Hopital's rule but how would I do this without this?</p>
Bill
361,593
<p>We can solve this problem by using our knowledge of limits for composite functions.</p> <p>Let $u = \sin^2 x.$</p> <p>$x \to 0 \implies \sin^2 x \to 0 \implies u \to 0$</p> <p>$\frac{\ln (\cos x)}{\sin^2x} = \frac{\ln (\cos^2 x)}{2\sin^2x} = \frac{\ln (1 - \sin^2 x)}{2\sin^2x} = \frac{\ln (1 - u)}{2u} = 1/2 \ln(1-u)^\frac{1}{u}$</p> <p>Note that $(1-u)^\frac{1}{u} \to 1/e$ as $u \to 0$, so by continuity of $\ln$ we can use the limit rule for composite functions to get </p> <p>$\lim_{x \to 0} \frac{\ln (\cos x)}{\sin^2 x} = \frac{1}{2} \lim_{u \to 0}\ln(1-u)^\frac{1}{u} = \frac{1}{2}\ln \frac{1}{e} = -\frac{1}{2}$</p>
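A numerical spot-check of the result (not a proof, just evaluating the quotient at small $x$):

```python
import math

# Evaluate ln(cos x)/sin^2(x) at progressively smaller x; the values
# should approach the claimed limit -1/2.
for x in [0.1, 0.01, 0.001]:
    value = math.log(math.cos(x)) / math.sin(x) ** 2
    print(x, value)   # tends to -0.5
```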
2,834,864
<p>Is it safe to assume that if $a\equiv b \pmod {35 =5\times7}$</p> <p>then $a\equiv b\pmod 5$ is also true?</p>
Billy
13,942
<blockquote> <p>A=hH=kK</p> </blockquote> <p>Correct - and, therefore, $H = h^{-1}kK$. But $H$ is a group, so must contain the identity element $e$. So, as $e\in h^{-1}kK$, it's easy to see that $K$ must contain the element $k^{-1}h$. But now $K$ is a group too, so it's closed under inverses, and so it contains $(k^{-1}h)^{-1} = h^{-1} k$. Hence $h^{-1}kK$ is just $K$.</p>
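The lemma used here ($hH = kK$ for subgroups $H, K$ forces $H = K$) can be checked exhaustively in a small group. The sketch below does this in $S_3$, with permutations encoded as tuples (an arbitrary choice of representation):

```python
from itertools import permutations, product

# Elements of S3 as tuples: p sends i to p[i].
S3 = list(permutations(range(3)))
e = (0, 1, 2)

def compose(p, q):               # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def is_subgroup(s):
    # A nonempty finite subset containing e and closed under composition
    # is a subgroup.
    return e in s and all(compose(a, b) in s for a in s for b in s)

# All subgroups of S3, by brute force over the nonempty subsets.
subgroups = []
for mask in range(1, 1 << len(S3)):
    s = frozenset(S3[i] for i in range(len(S3)) if mask >> i & 1)
    if is_subgroup(s):
        subgroups.append(s)

def left_coset(g, H):
    return frozenset(compose(g, h) for h in H)

# hH = kK (as sets) forces H = K, for every choice of h, k, H, K.
for H, K in product(subgroups, repeat=2):
    for h, k in product(S3, repeat=2):
        if left_coset(h, H) == left_coset(k, K):
            assert H == K
print("checked", len(subgroups), "subgroups of S3")
```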
2,482,250
<p>Consider the generating function $$\frac1{1-2tx+t^2}=\sum_{n=0}^{\infty}y_n(x)t^n$$. I wish to find a second order differential equation of the form $$p(x)y_n''(x)+q(x)y_n'(x)+\lambda_ny_n(x)=0$$ and a recurrence relation satisfied by $y_n(x)$ of the form $$a_ny_{n+1}(x)+b_ny_n(x)+c_ny_{n-1}(x)=xy_n(x)$$. How should I proceed? Note that differentiating with respect to $t$ or $x$ eventually lands me with an expression involving coefficients of $x$ or $t$ respectively. The generating function is just square of the one for Legendre's polynomial. Does that have anything got to simplify the problem? Any hints. Thanks beforehand. </p>
vidyarthi
349,094
<p>After a little search, the generating function is just the one for the Chebyshev polynomials of the second kind, $U_n(x)$. Thus from <a href="https://en.wikipedia.org/wiki/Chebyshev_polynomials" rel="nofollow noreferrer">Wikipedia</a>, the desired differential equation and recurrence relation are:</p> <p>Recurrence relation $$y_{n+1}(x)-2xy_n(x)+y_{n-1}(x)=0$$ and </p> <p>Differential equation $$(1-x^2)y_n''(x)-3xy_n'(x)+n(n+2)y_n(x)=0$$</p> <p>The derivation for an even more general kind of functions, called Gegenbauer polynomials or ultraspherical polynomials, is given in Stein and Weiss's book on Introduction to Fourier Analysis</p>
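As a numerical sanity check (a sketch, not part of the original answer), one can build the $U_n$ from the recurrence $U_{n+1}=2xU_n-U_{n-1}$, confirm that they really are the $t$-coefficients of $1/(1-2xt+t^2)$, and verify the ODE $(1-x^2)y''-3xy'+n(n+2)y=0$ term by term; the truncation depth and sample points below are arbitrary choices.

```python
# Build U_0 ... U_N as coefficient lists (index = power of x) from the
# recurrence U_{n+1} = 2x U_n - U_{n-1}, with U_0 = 1, U_1 = 2x.
N = 12
U = [[1.0], [0.0, 2.0]]
for n in range(1, N):
    shifted = [0.0] + [2.0 * c for c in U[n]]                  # 2x * U_n
    prev = U[n - 1] + [0.0] * (len(shifted) - len(U[n - 1]))
    U.append([a - b for a, b in zip(shifted, prev)])

def val(p, x):
    return sum(c * x ** k for k, c in enumerate(p))

def deriv(p):
    return [k * c for k, c in enumerate(p)][1:]

x, t = 0.3, 0.4
# (a) The U_n are the t-coefficients of 1/(1 - 2xt + t^2) (truncated sum):
series = sum(val(U[n], x) * t ** n for n in range(N + 1))
exact = 1.0 / (1.0 - 2.0 * x * t + t * t)
assert abs(series - exact) < 1e-3

# (b) Each U_n satisfies (1 - x^2) y'' - 3x y' + n(n+2) y = 0:
for n in range(N):
    y, y1, y2 = val(U[n], x), val(deriv(U[n]), x), val(deriv(deriv(U[n])), x)
    assert abs((1 - x * x) * y2 - 3 * x * y1 + n * (n + 2) * y) < 1e-8
print("recurrence and ODE verified")
```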
2,482,250
Chappers
221,811
<p>The three-term relation is easy in this case: simply multiply both sides by $1-2xt+t^2$, and then we have $$ 1 = \sum_{n=0}^{\infty} (1-2xt+t^2)y_n(x)t^n, $$ and equating powers of $t^{n+1}$ gives $$ y_{n-1}(x) -2xy_n(x)+y_{n+1}(x) = 0 $$ for $n \geq 1$ (this falls apart at the bottom $n$, of course, as one would expect).</p> <p>The differential equation is harder. We can find the differentiated terms by differentiating the generating function with respect to $x$, since $$ \frac{\partial}{\partial x} \sum_{n=0}^{\infty} y_n(x)t^n = \sum_{n=0}^{\infty} y_n'(x)t^n. $$ Thus $p(x)y_n''(x)+q(x)y_n'(x)$ is the coefficient of $t^n$ in the series expansion of $$ \left( p(x) \frac{\partial^2}{\partial x^2} + q(x) \frac{\partial}{\partial x} \right) \frac{1}{1-2xt+t^2} = \frac{2qt + (8p-4qx)t^2 + 2qt^3}{(1-2xt+t^2)^3}. $$ We need to match this with a $\lambda_n y_n$ term. The right way to do this is to differentiate with respect to $t$ and then multiply by $t$ to keep the powers the same, which gives $$ t\frac{\partial}{\partial t} \sum_{n=0}^{\infty} y_n(x)t^n = \sum_{n=0}^{\infty} ny_n(x) t^n.$$ We then match up the coefficients of $t$ in the generating function expression with those we derived by differentiating with respect to $x$ to make them vanish and, therefore, find a polynomial in $n$ that works. 
A general rule is that two differentiations should be sufficient (giving a quadratic in $n$), so here we have $$ \left( A \left( t\frac{\partial}{\partial t} \right)^2 + B t\frac{\partial}{\partial t} + C \right) \frac{1}{1-2xt+t^2} = \\ \frac{1}{(1-2xt+t^2)^3}(C + (2 A x + 2 B x - 4 C x)t + (-4 A - 2 B + 2 C + 4 A x^2 - 4 B x^2 + 4 C x^2) t^2 + (-6 A x + 6 B x - 4 C x) t^3 + (4 A - 2 B + C) t^4 ) $$</p> <p>Now equating coefficients of $t$ in $$p\partial_{x}^2 G+q \partial_x G + (A(t\partial_t)^2+B(t\partial_t)+C)G = 0$$ gives \begin{align} 0 + C &amp;= 0 \\ 2q + 2 A x + 2 B x - 4 C x &amp;= 0 \\ 8p-4qx -4 A - 2 B + 2 C + 4 A x^2 - 4 B x^2 + 4 C x^2 &amp;= 0 \\ 2q -6 A x + 6 B x - 4 C x &amp;= 0 \\ 0 + 4 A - 2 B + C &amp;= 0 \end{align} Looks bad, but isn't since the outer equations are simple. We immediately find $C=0$ and $B=2A$, simplifying to \begin{align} q + 3 A x &amp;= 0 \\ 2p-qx -2 A - A x^2 &amp;= 0 \\ q +3 A x &amp;= 0 \\ \end{align} Hence $q=-3Ax$, and then $p=A(1-x^2)$, which gives $$ (1-x^2)y_n''-3xy_n' + n(n+2) y_n = 0. $$</p>
960,010
<p>Two sides of a triangle are 15cm and 20cm long respectively. $A)$ How fast is the third side increasing if the angle between the given sidesis 60 degrees and is increasing at the rate of $2^\circ/sec$? $B)$ How fast is the area increasing?</p> <p>$A)$ I used $c^2=a^2+b^2-2ab\cos(\theta)$ so I got the missing side $c=28.72$ is this right? and then I get confused with the implicit differentiation where, $2c$$\frac{dc}{dt}$ $= $2a$\frac{da}{dt}$ $+$ $2b$$\frac{db}{dt}$ $-$ $2\cos(\theta)$ ($a$$\frac{db}{dt}$+$b$$\frac{da}{db})$ I know that 60degrees is $\frac{Pi}{3}$ to radiance but I keep on getting the wrong answer, they said it's supposed to be $\frac{dc}{dt}=0.5$ I don't know what to substitute with $\frac{da}{dt}$ and $\frac{db}{dt}$ is it the $2^\circ/sec$ ? I'm so confused.</p> <p>$B)$ I also get confuse here on what formula to use.</p>
JimmyK4542
155,509
<p>In part a), $\theta$ is the only variable mentioned as changing. </p> <p>So $\dfrac{d\theta}{dt} = 2^{\circ}/\text{sec} = \dfrac{\pi}{90}\text{rad/sec}$ and $\dfrac{da}{dt} = \dfrac{db}{dt} = 0 \text{cm/sec}$. </p> <p>Can you show what you plugged into the formula to get $c \approx 28.72$? I got $c = 5\sqrt{13} \approx 18.03$.</p> <p>Also, when you differentiated $c^2 = a^2+b^2-2ab\cos\theta$ you should get $2c\dfrac{dc}{dt} = 2ab\sin\theta \dfrac{d\theta}{dt}$</p> <p>In part b), note that the area of a triangle is $A = \dfrac{1}{2}ab\sin\theta$. </p>
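Plugging the given numbers into these formulas confirms the expected answers numerically (just an evaluation of the relations above):

```python
import math

# Law of cosines with a, b fixed: c^2 = a^2 + b^2 - 2ab cos(theta), so
# differentiating in t gives 2c c' = 2ab sin(theta) theta'.
a, b = 15.0, 20.0
theta = math.pi / 3
dtheta = math.pi / 90                          # 2 degrees/sec in radians

c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(theta))
dc = a * b * math.sin(theta) / c * dtheta
dA = 0.5 * a * b * math.cos(theta) * dtheta    # from A = (1/2) ab sin(theta)

print(c)    # about 18.03, i.e. 5*sqrt(13) -- not 28.72
print(dc)   # about 0.50 cm/sec
print(dA)   # about 2.62 cm^2/sec
```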
2,725,019
<blockquote> <p>Define $S := \{x ∈ \mathbb Q : x^2 ≤ 2\}$. Prove that $a:=\inf \{S\}$ satisfies $a^2 = 2$.</p> </blockquote> <p>Since a is a lower bound for S, we have $a^2\le 2$. if $a^2\neq 2$ then $a^2 &lt; 2$ and we may set $\epsilon:= a^2 − 2 &gt; 0$ and then I am not sure how to show it satisfies $a^2 = 2$.</p>
Jimmy R.
128,037
<p>Consider $a=-\sqrt{2}$ and show that $a=\inf\{S\}$. This is equivalent to showing that</p> <ul> <li>for every $x\in S$, it holds that $x\ge -\sqrt{2}$.</li> <li>for every $\epsilon&gt;0$, there is $x\in S$ such that $x&lt;-\sqrt{2}+\epsilon$.</li> </ul> <p>The first one should be straightforward. For the second, take $\epsilon&gt;0$ arbitrary and, using the density of $\mathbb Q$ in $\mathbb R$, pick a rational $x$ with $-\sqrt{2}&lt;x&lt;\min(-\sqrt{2}+\epsilon,0)$; then $x^2&lt;2$, so $x\in S$. (Note that $-\sqrt{2}+\epsilon/2$ itself is irrational, so it need not lie in $S$.) </p> <p>Just note that $-\sqrt{2}\notin S$ since $-\sqrt{2}\notin \mathbb Q$, so $-\sqrt{2}$ is indeed an $\inf$ but not a $\min$ of set $S$.</p>
2,725,019
Piquito
219,998
<p>HINT.- $x^2\le2\iff-\sqrt2\le x\le \sqrt2$ and $(-\sqrt2)^2=(\sqrt2)^2=2.$</p>
2,491,448
<p>We roll a die ten times. What's the probability of getting all six different numbers of the die?</p> <p>If $A_i$ is the event of getting at least one $i$, for $i=1$ to $6$, then what the problem is asking for is $P(A_1 \cap A_2 \cap A_3 \cap A_4 \cap A_5 \cap A_6)$. So I guess I will get this probability if I develop the inclusion-exclusion formula for the union of different events. Is the starting point to find $P(A_1 \cup A_2 \cup A_3 \cup A_4 \cup A_5 \cup A_6 \cup A_7 \cup A_8 \cup A_9 \cup A_{10})$, where $A_i$ would here be each throw of the die?</p> <p>We were asked to do it by the inclusion-exclusion principle, as the title of the problem says. TIA for any hint or answer.</p>
N. F. Taussig
173,070
<p>If there were no restrictions, there would be six possible outcomes for each of the ten throws, so there are $6^{10}$ possible sequences of throws. From these, we must exclude those in which fewer than six outcomes occur. </p> <p>There are $\binom{6}{k}$ ways to exclude $k$ of the $6$ outcomes and $(6 - k)^{10}$ possible sequences involving only the remaining $6 - k$ possible outcomes of a die throw. By the Inclusion-Exclusion Principle, the number of ways all six outcomes can occur when a six-sided die is tossed ten times is $$\sum_{k = 0}^{6} (-1)^k\binom{6}{k}(6 - k)^{10} = \binom{6}{0}6^{10} - \binom{6}{1}5^{10} + \binom{6}{2}4^{10} - \binom{6}{3}3^{10} + \binom{6}{4}2^{10} - \binom{6}{5}1^{10} + \binom{6}{6}0^{10}$$ </p>
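The inclusion-exclusion count is easy to verify by computer; the sketch below cross-checks the formula against brute force on a smaller case (3 faces, 4 rolls) before evaluating the 10-roll count. Function names are just for illustration.

```python
from itertools import product
from math import comb

# Inclusion-exclusion count of length-n sequences over k faces using all k.
def all_faces(n, k):
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1))

# Brute-force cross-check on a small case: 4 rolls of a 3-sided die.
brute = sum(1 for s in product(range(3), repeat=4) if len(set(s)) == 3)
assert brute == all_faces(4, 3) == 36

count = all_faces(10, 6)
print(count, count / 6 ** 10)   # 16435440, probability about 0.2718
```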
3,278
<h3>What are Community Promotion Ads?</h3> <p>Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown.</p> <h3>Why do we have Community Promotion Ads?</h3> <p>This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things:</p> <ul> <li>the site's twitter account</li> <li>useful tools or resources for the mathematically inclined</li> <li>interesting articles or findings for the curious</li> <li>cool events or conferences</li> <li>anything else your community would genuinely be interested in</li> </ul> <p>The goal is for future visitors to find out about <em>the stuff your community deems important</em>. This also serves as a way to promote information and resources that are <em>relevant to your own community's interests</em>, both for those already in the community and those yet to join. </p> <h3>How does it work?</h3> <p>The answers you post to this question <em>must</em> conform to the following rules, or they will be ignored. </p> <ol> <li><p>All answers should be in the exact form of:</p> <pre><code>[![Tagline to show on mouseover][1]][2] [1]: http://image-url [2]: http://clickthrough-url </code></pre> <p>Please <strong>do not add anything else to the body of the post</strong>. If you want to discuss something, do it in the comments.</p></li> <li><p>The question must always be tagged with the magic <a href="/questions/tagged/community-ads" class="post-tag moderator-tag" title="show questions tagged 'community-ads'" rel="tag">community-ads</a> tag. 
In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form.</p></li> </ol> <h3>Image requirements</h3> <ul> <li>The image that you create must be <strong>220 x 250 pixels</strong></li> <li>Must be hosted through our standard image uploader (imgur)</li> <li>Must be GIF or PNG</li> <li>No animated GIFs</li> <li>Absolute limit on file size of 150 KB</li> </ul> <h3>Score Threshold</h3> <p>There is a <strong>minimum score threshold</strong> an answer must meet (currently <strong>6</strong>) before it will be shown on the main site.</p> <p>You can check out the ads that have met the threshold with basic click stats <a href="http://meta.math.stackexchange.com/ads/display/3278">here</a>.</p>
Grace Note
14,141
<p><a href="http://twitter.com/#!/stackmath" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hfof3.png" alt="Follow us on Twitter!"></a></p>
3,278
Ilmari Karonen
9,602
<p><a href="http://citeseer.ist.psu.edu/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mll8O.png" alt="Scientific Literature Digital Library &amp; Search Engine"></a></p>
3,278
Ilmari Karonen
9,602
<p><a href="http://planetmath.org/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GaFJ9.png" alt="PlanetMath - Math for the people, by the people."></a></p>
3,278
JDH
413
<p><a href="http://cantorsattic.info" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UAtF2.png" alt="Climb into Cantor&#39;s Attic, containing infinities large and small"></a></p>
1,670,074
<p>We know that a necessary and sufficient condition for a path-connected, locally path-connected space to have a universal cover is that it is semi-locally simply connected.</p> <p>Now since $\mathbb R^2\setminus\{0\}$ is such a space, it must have a universal cover. However I can't see what the universal cover of $\mathbb R^2\setminus\{0\}$ actually is. Can someone help me?</p> <p>Thank you.</p>
Jyrki Lahtonen
11,619
<p>Hint: Think of the mapping $z\mapsto e^z$ from $\Bbb{C}$ to $\Bbb{C}\setminus\{0\}$. Its derivative is also $e^z$, which is always non-zero. Therefore the mapping is conformal everywhere, i.e. a local homeomorphism. In fact it is a covering map onto $\Bbb{C}\setminus\{0\}$, and since $\Bbb{C}$ is simply connected, this exhibits $\Bbb{C}$ as the universal cover.</p>
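A small numerical illustration of the hint (not a proof): every nonzero $w$ has a preimage under $\exp$, and the whole fibre over $w$ is obtained by translating one preimage by $2\pi i k$.

```python
import cmath

# Spot-check of the covering exp: C -> C \ {0}.
w = 2.5 - 1.3j                            # an arbitrary nonzero point downstairs
z = cmath.log(w)                          # one preimage upstairs
assert abs(cmath.exp(z) - w) < 1e-12      # exp hits w

# The whole fibre over w is {z + 2*pi*i*k : k integer}.
for k in (-2, -1, 1, 2):
    assert abs(cmath.exp(z + 2j * cmath.pi * k) - w) < 1e-12
print("fibre over", w, "verified for k in -2..2")
```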
1,820,036
<p>I'd be thankful if some could explain to me why the second equality is true... I just can't figure it out. Maybe it's something really simple I am missing?</p> <blockquote> <p>$\displaystyle\lim_{\epsilon\to0}\frac{\det(Id+\epsilon H)-\det(Id)}{\epsilon}=\displaystyle\lim_{\epsilon\to0}\frac{1}{\epsilon}\left[\det \begin{pmatrix} 1+\epsilon h_{11} &amp; \epsilon h_{12} &amp;\cdots &amp; \epsilon h_{1n} \\ \epsilon h_{21} &amp; 1+\epsilon h_{22} &amp;\cdots \\ \vdots &amp; &amp; \ddots \\ \epsilon h_{n1} &amp; &amp; &amp;1+\epsilon h_{nn} \end{pmatrix}-1\right]$</p> <p>$\qquad\qquad\qquad\qquad\qquad\qquad=\displaystyle\sum_{i=1}^nh_{ii}=\text{trace}(H)$</p> </blockquote>
Nicholas Stull
28,997
<p>As I suggested in my comment, we proceed by expanding $$|Id + \varepsilon H| = |A| = \left| \begin{array}{cccc} 1+\varepsilon h_{11} &amp; \varepsilon h_{12} &amp; \cdots &amp; \varepsilon h_{1n}\\ \varepsilon h_{21} &amp; 1+\varepsilon h_{22} &amp; \cdots &amp; \ \\ \vdots &amp; \ &amp; \ddots &amp; \ &amp;\\ \varepsilon h_{n1} &amp; \ &amp; \ &amp; 1+\varepsilon h_{nn} \end{array} \right|$$ in powers of $\varepsilon$. First, we have: $$|A| = (1+\varepsilon h_{11}) \left| \begin{array}{cccc} 1+\varepsilon h_{22} &amp; \varepsilon h_{23} &amp; \cdots &amp; \varepsilon h_{2n}\\ \varepsilon h_{32} &amp; 1+\varepsilon h_{33} &amp; \cdots &amp; \ \\ \vdots &amp; \ &amp; \ddots &amp; \ &amp;\\ \varepsilon h_{n2} &amp; \ &amp; \ &amp; 1+\varepsilon h_{nn} \end{array} \right| + \varepsilon\sum_{j=2}^n (-1)^{1+j} h_{1j} \det(A_{1j})$$ where $\det(A_{1j}) = O(\varepsilon)$ (here, I am using the usual notation $A_{1j}$ to be the matrix obtained by deleting the first row and $j$th column from $A$).</p> <p>Justification of this part: The minimal power of $\varepsilon$ would occur when the maximal number of diagonal terms is included. In this case (dealing with $(n-1)\times (n-1)$ minors), this would mean $n-2$ diagonal terms, since all terms of the cofactor expansion along the first row (with the exception of the first one, which I am separating from the rest of the computation) would exclude the $1+\varepsilon h_{11}$, as well as the diagonal entry that sits in the $j$th column. 
Finally, if there are $n-2$ diagonal terms multiplied together (in the minimal case), then there must be one off-diagonal term, introducing the (claimed) factor of $\varepsilon$.</p> <p>We hence conclude that $$|A| = (1+\varepsilon h_{11}) \left| \begin{array}{cccc} 1+\varepsilon h_{22} &amp; \varepsilon h_{23} &amp; \cdots &amp; \varepsilon h_{2n}\\ \varepsilon h_{32} &amp; 1+\varepsilon h_{33} &amp; \cdots &amp; \ \\ \vdots &amp; \ &amp; \ddots &amp; \ &amp;\\ \varepsilon h_{n2} &amp; \ &amp; \ &amp; 1+\varepsilon h_{nn} \end{array} \right| + O(\varepsilon^2)$$</p> <p>Continuing (inductively) to expand the determinant in this manner, we see that \begin{align} |A| &amp;= \prod_{j=1}^n (1+\varepsilon h_{jj}) + O(\varepsilon^2)\\ &amp;= 1+\varepsilon \sum_{j=1}^n h_{jj} + O(\varepsilon^2) \end{align} which exactly yields the desired equality, since this immediately says $$|A|-1 = \varepsilon \sum_{j=1}^n h_{jj} + O(\varepsilon^2),$$ hence \begin{align} \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}(|A|-1) &amp;= \lim_{\varepsilon\to 0} \frac{1}{\varepsilon} \left( \varepsilon \sum_{j=1}^n h_{jj} + O(\varepsilon^2) \right)\\ &amp;= \lim_{\varepsilon\to 0} \left( \sum_{j=1}^n h_{jj} + O(\varepsilon) \right)\\ &amp;= \sum_{j=1}^n h_{jj} = \text{Trace}(H) \end{align}</p>
1,820,036
Sheldon Axler
256,061
<p>Here is a conceptual proof that avoids expanding a complicated determinant:</p> <p>The determinant of a linear operator (or of a square matrix) is the product of the eigenvalues, counting multiplicity. The trace of a linear operator (or of a square matrix) is the sum of the eigenvalues, counting multiplicity.</p> <p>Now suppose that $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $H$, counting multiplicity. Then the eigenvalues of $I + \epsilon H$ are $$ 1 + \epsilon \lambda_1, \dots, 1 + \epsilon \lambda_n, $$ counting multiplicity. Thus $$ \det(I + \epsilon H) = (1 + \epsilon \lambda_1) \cdots (1 + \epsilon \lambda_n). $$ It is now clear that \begin{align*} \lim_{\epsilon\to 0} \frac{ \det(I + \epsilon H) - 1}{\epsilon} &amp;= \lambda_1 + \dots + \lambda_n\\ &amp;= \text{trace } H, \end{align*} as desired.</p>
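The identity is easy to check numerically for a small matrix; the sketch below uses a hand-rolled $3\times 3$ determinant and a finite $\epsilon$ (the seed and tolerance are arbitrary choices).

```python
import random

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(0)
H = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
trace = H[0][0] + H[1][1] + H[2][2]

eps = 1e-7
M = [[(1.0 if i == j else 0.0) + eps * H[i][j] for j in range(3)]
     for i in range(3)]
diff_quot = (det3(M) - 1.0) / eps
print(diff_quot, trace)   # the two agree to several digits
```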
747,561
<p>I'm having trouble figuring out the limits. What messes me up is that the limit approaches infinity; usually it approaches a specific number. Is there a trick to solve problems like these? </p> <p>So for example, use the root test to find convergence/divergence of $$\sum_{n=1}^{\infty}\frac{(n!)^n}{(n^n)^7}.$$</p>
JEET TRIVEDI
115,676
<p>$$\sum_{n=1}^{\infty}\dfrac{(n!)^n}{(n^n)^7}$$ so, we let $a_n=\dfrac{(n!)^n}{(n^n)^7}.$</p> <p>To use the root test, we take $$L=\lim_{n\rightarrow \infty}\sqrt[n]{\left|a_n\right|}$$ $$L=\lim_{n\rightarrow \infty}\sqrt[n]{\dfrac{(n!)^n}{(n^n)^7}}$$ $$L=\lim_{n\rightarrow \infty}\dfrac{n!}{n^7}$$ And as $n!$ increases much faster than $n^7$ as $n\rightarrow\infty$, $$L=\infty$$ As $L&gt;1$, the series diverges. </p>
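A quick numerical look at the $n$-th root $n!/n^7$ confirms it blows up (illustration only; the cutoffs are arbitrary):

```python
from math import factorial

# The n-th root of |a_n| is n!/n^7; factorials eventually beat any
# fixed power of n.
ratios = [factorial(n) / n ** 7 for n in range(1, 21)]
print(ratios[9], ratios[-1])   # n = 10 and n = 20: about 0.36, then about 1.9e9
```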
46,236
<p>Apologies for the uninformative title, this is a relatively specific question so it was hard to title. </p> <p>I'm solving the following recurrence relation:</p> <blockquote> <p>$a_{n} + a_{n-1} - 6a_{n-2} = 0$<br> With initial conditions $a_{0} = 3$ and $a_{1} = 1$</p> </blockquote> <p>And I have it mostly figured out except for the very last part.</p> <p>My working:</p> <p>We have characteristic equation $s^2 + s - 6 = 0$ This factorises to $(s+3)(s-2)$<br> Hence we have roots $s=-3$ and $s=2$</p> <p>and hence the <strong>solution has the form $a_{n} = -x3^n + y2^n$</strong></p> <p>We sub in the initial conditions:</p> <p>$a_{0} = x + y = 3$<br> $a_{1} = -3x+2y = 1$<br></p> <p>And solving this system we have solutions: <br> $x = 1$ and $y = 2$</p> <p>Hence subbing this back to what we work out to be the general form of the solution: </p> <p>$a_{n} = (-1)3^n + (2)2^n$ <br> $a_{n} = (-3)^n + (4)^n$ Correct?</p> <p>But it is incorrect, the correct solution is:</p> <p>$a_{n} = (-3)^n + 2^{n+1}$</p> <p>I don't understand where the $2^{n+1}$ came from. What am I missing here?</p>
Shai Covo
2,810
<p>$(2)2^n$ is equal to $2^{n+1}$, not to $4^n$. (Note also that the general solution should be written $a_{n} = x(-3)^n + y\,2^n$: with $x=1$ the first term is $(-3)^n$, which is not the same as $-(3^n)$ when $n$ is even.)</p>
3,768,086
<p>Show that <span class="math-container">$(X_n)_n$</span> converges in probability to <span class="math-container">$X$</span> if and only if for every continuous function <span class="math-container">$f$</span> with compact support, <span class="math-container">$f(X_n)$</span> converges in probability to <span class="math-container">$f(X).$</span></p> <p><span class="math-container">$\implies$</span> is very easy, the problem is with the converse. Any suggestions to begin?</p>
IanFromWashington
635,153
<p>Thanks to the comment of @JMoravitz I realized my mistake. I was interpreting turns as the rolls <span class="math-container">$A$</span> AND <span class="math-container">$B$</span>, as in <span class="math-container">$\{A_1,B_1\}, \{A_2,B_2\}, \dots$</span>. In reality the question is merely asking what the probability of <span class="math-container">$B$</span> winning if <span class="math-container">$A$</span> rolls first.</p> <p><strong>The work is as follows:</strong> We calculate the probability of <span class="math-container">$B$</span> winning. Denote the probability of <span class="math-container">$B$</span> winning on their <span class="math-container">$i$</span>th roll as <span class="math-container">$S_i$</span>. Now, the probabilities of <span class="math-container">$B$</span> winning on her first roll, second roll, third roll, etc., are as follows: <span class="math-container">\begin{equation*} P(S_1) = \biggr(\frac{2}{3}\biggr)\biggr(\frac{2}{3}\biggr), \quad P(S_2) = \biggr(\frac{2}{3}\biggr)\biggr(\frac{1}{3}\biggr)\biggr(\frac{2}{3}\biggr)\biggr(\frac{2}{3}\biggr), \quad P(S_3) = \biggr(\biggr(\frac{2}{3}\biggr)\biggr(\frac{1}{3}\biggr)\biggr)^2\biggr(\frac{2}{3}\biggr)\biggr(\frac{2}{3}\biggr), \dots \end{equation*}</span> It then follows that in general that <span class="math-container">$\displaystyle P(S_i) = \biggr(\frac{2}{9}\biggr)^{i-1} \biggr(\frac{4}{9}\biggr).$</span> Thus, it follows that the probability of <span class="math-container">$B$</span> winning is calculated as <span class="math-container">\begin{equation*} P(S) = P\biggr(\bigcup_{i=1}^\infty S_i\biggr) = \sum_{i=1}^\infty P(S_i) = \sum_{i=1}^\infty \biggr(\frac{2}{9}\biggr)^{i-1} \biggr(\frac{4}{9}\biggr) = \frac{4}{9} \sum_{i=1}^\infty \biggr(\frac{2}{9}\biggr)^{i-1} = \frac{4}{9} \cdot \frac{9}{7} = \frac{4}{7}. \end{equation*}</span></p>
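The series can be summed exactly with rational arithmetic as a check (a sketch; the partial-sum depth is arbitrary):

```python
from fractions import Fraction

# Exact sum of P(B wins) = sum_{i>=1} (2/9)^(i-1) * (4/9).
first, ratio = Fraction(4, 9), Fraction(2, 9)
total = first / (1 - ratio)               # closed form first/(1 - ratio)
assert total == Fraction(4, 7)

# Cross-check against a long partial sum: the remaining tail is tiny.
partial = sum(first * ratio ** (i - 1) for i in range(1, 40))
assert 0 < total - partial < Fraction(1, 10 ** 20)
print(total)   # 4/7
```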
1,428,377
<p>So I was watching the show Numb3rs, and the math genius was teaching, and something he did just stumped me.</p> <p>He was asking his class (more specifically a student) on which of the three cards is the car. The other two cards have an animal on them. Now, the student picked the middle card to begin with. So the cards looks like this</p> <pre><code>+---+---+---+ | 1 | X | 3 | +---+---+---+ </code></pre> <p><em>The <code>X</code> Representing The Picked Card</em></p> <p>Then he flipped over the third card, and it turned out to be an animal. All that is left now is one more animal, and a car. He asks the student if the chances are higher of getting a car if they switch cards. The student responds no (That's what I thought too).</p> <p>The student was wrong. What the teacher said is "Switching cards actually doubles your chances of getting the car".</p> <p>So my question is, why does switching selected cards double your chances of getting the car when 1 of the 3 cards are already revealed. I thought it would just be a simple 50/50 still, please explain why the chances double!</p>
Marconius
232,988
<p>This is the classic version of the Monty Hall problem.</p> <p>Note that there is only one car, so the host is always able to reveal a goat behind one of the two unselected doors. Because he can always do this, the reveal gives the contestant no new information about his original pick.</p> <p>So by not switching:</p> <p>$$P_n = \frac{1}{3}$$</p> <p>If on the other hand the contestant switches, he wins exactly when his original pick was wrong (and vice versa), so he has inverted his probability of winning:</p> <p>$$P_s = 1 - P_n = \frac{2}{3}$$</p>
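A Monte Carlo simulation (a sketch with an arbitrary seed and trial count) reproduces these probabilities:

```python
import random

# Simulate one game. The host always opens a door that is neither the
# contestant's pick nor the car.
def play(switch, rng):
    car = rng.randrange(3)
    pick = rng.randrange(3)
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(1)
trials = 100_000
stay = sum(play(False, rng) for _ in range(trials)) / trials
swap = sum(play(True, rng) for _ in range(trials)) / trials
print(stay, swap)   # roughly 1/3 and 2/3
```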
371,318
<p>The original problem was to consider how many ways there are to make a wiring diagram out of $n$ resistors. When I thought about this I realized that if you can only connect in series and shunt, then this is the same as dividing an area with $n-1$ horizontal and vertical lines, where each line only divides one of the current area sections into two smaller ones.</p> <p>This is also the same as the number of ways to make a set of $n$ (and only $n$) rectangles into a bigger rectangle, if the rectangles can be drawn by dividing the big rectangle, line by line, into the set of rectangles without loose endpoints of the lines. - Can someone think of "an expression in $n$" which equals this amount, independent of the order of the rectangles or their position?</p> <p>(It is only the relations between the area sections that matters and not left or right, up or down. However dividing an area with a horizontal line is not the same as dividing it with a vertical line.)</p>
OctarineBean
111,531
<p>This is a question for surreal numbers. Surreal numbers are a really amazing thing invented by John Conway that include numbers like 0 and 3/4, but also things like "twice the square root of infinity, all plus an infinitesimal". This question depends on the values of the infinite and infinitesimal, but the way it works is this. The number ω is defined as the number of items in the set {0,1,2,3,4,5...}, so it's infinite. The number ε is defined as 1/ω. So ω*ε is obviously one. If you think about it a bit, it makes sense that 2ε^2 * ω is 2ε, and so on.</p> <p><a href="http://en.wikipedia.org/wiki/Surreal_number" rel="nofollow">http://en.wikipedia.org/wiki/Surreal_number</a></p>
498,694
<p>So, I'm learning limits right now in calculus class.</p> <p>When $x$ approaches infinity, what does this expression approach?</p> <p>$$\frac{(x^x)}{(x!)}$$</p> <p>Why? Since, the bottom is $x!$, doesn't it mean that the bottom goes to zero faster, therefore the whole thing approaches 0?</p>
Glen O
67,842
<p>I figured I'd go for a more "proper" proof. Notice that, if we let $a_n = n^n/n!$ we can write that</p> <p>$$ a_{n+1}-a_n = \frac{(n+1)^{n+1}}{(n+1)!}-\frac{n^n}{n!} = \frac{(n+1)^{n+1}-(n+1)n^n}{(n+1)!} $$ which can then be written as $$ \frac{(n+1)^n-n^n}{n!} $$</p> <p>Now, using the binomial theorem, $(n+1)^n=\sum_{k=0}^{n}\binom{n}{k}n^k$, so the $k=n$ term $n^n$ cancels in the numerator, and the $k=n-1$ term is $$ \binom{n}{n-1}n^{n-1}\cdot1 = n^n $$ All remaining terms in the numerator are positive. Therefore, we have $$ a_{n+1}-a_n \ge \frac{n^n}{n!} = a_n $$ and so $$ a_{n+1} \ge 2a_n $$ Therefore, as $a_1=1$, we can clearly see that $$ a_n \ge 2^{n-1} $$ and thus $$ \lim_{n\to \infty} a_n = \infty $$</p>
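A quick integer-arithmetic check (my own sketch) that $a_n = n^n/n!$ grows without bound: in fact $n^n \ge 2^{n-1}\,n!$, i.e. $a_n \ge 2^{n-1}$, which can be verified exactly for small $n$.

```python
from math import factorial

# Verify n^n >= 2^(n-1) * n! exactly for the first few n,
# which is equivalent to a_n = n^n/n! >= 2^(n-1).
for n in range(1, 40):
    assert n ** n >= 2 ** (n - 1) * factorial(n)
print("a_n >= 2^(n-1) holds for n = 1..39")
```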
237,197
<p>I'm new here. If there is anything not appropriate please let me know.</p> <p>I am currently working on a differential equation in which one of the terms is an integral of the unknown function.</p> <p><span class="math-container">$$ \frac{d^2u(x)}{dx^2}=cosh(G(x))+\frac{1}{C_1}\int_{0}^{1}{u(x)sinh(G(x))dx }+C_2 $$</span> with the boundary condition, <span class="math-container">$ u(x=0)=0$</span> and <span class="math-container">$u'(x=1)=0$</span>,</p> <p>where <span class="math-container">$ G(x)= 2ln(\frac{1+C_3\cdot exp(-x)}{1-C_3\cdot exp(-x)})$</span> and C1, C2 and C3 are system constants which could be predefined.</p> <p>To show it more simply, I let <span class="math-container">$C_1=C_2=C_3=1$</span> in the code below,</p> <pre><code>G[x_] = 2 Log[(1 + Exp[-x])/(1 - Exp[-x])]; </code></pre> <pre><code>Sol = NDSolveValue[ { u''[x] == Cosh[ G[x] ] + NIntegrate[ u[x] *Sinh[ G[x] ],{x,0,1}] + 1 ,u'[1] == 0., u[0] == 0. } , u, {x, 0, 1}, PrecisionGoal -&gt; 10] ; </code></pre> <p>However, since u(x) in the integral has not yet been solved, the numerical integration would fail with the error message shown:</p> <p><code>&quot;The integrand {} has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,1}}&quot;</code></p> <p>Some suggested breaking the procedure of NDSolveValue into parts with the NIntegrate inserted. However, I am not sure how to do it correctly in Mathematica.</p> <p>Thanks for your kind help, I really appreciate it!</p> <p><strong>EDIT 1</strong></p> <p>Special thanks to Tugrul Temel and Alex Trounev, who showed the singularity at x = 0 of the G[x] function. I made an adjustment following Alex Trounev, shown below, to make the problem solvable! <span class="math-container">$ G(x)= 2ln(\frac{1+exp(-x)}{1-C_3\cdot exp(-x)})$</span> where <span class="math-container">$ C_3 = 0.99 $</span></p>
Akku14
34,287
<p>You can get an analytical solution for arbitrary c1, c2, c3.</p> <pre><code>G[x_] = 2 Log[(1 + c3 Exp[-x])/(1 - c3 Exp[-x])] </code></pre> <p>Since the integral is a number, name it nint and solve for it later. Get an analytical solution for this reduced equation. Do indefinite integration. (I don't show the lengthy intermediate results)</p> <pre><code>usol[c1_, c2_, c3_, nint_] = u /. First@ DSolve[{u''[x] == Cosh[G[x]] + 1/c1 nint + c2, u'[1] == 0, u[0] == 0}, u, x] integrand = u[x]*Sinh[G[x]] /. u -&gt; usol[c1, c2, c3, nint] // Simplify mint[x_] = Integrate[integrand, x] mmii[c1_, c2_, c3_, nint_] = Limit[mint[x], x -&gt; 1, Direction -&gt; 1] - Limit[mint[x], x -&gt; 0, Direction -&gt; -1] // Simplify </code></pre> <p>The integral has to equal the preassumed number nint.</p> <pre><code>nintsol[c1_, c2_, c3_] = nint /. First@Solve[mmii[c1, c2, c3, nint] == nint, nint] // Simplify Manipulate[ Plot[usol[c1, c2, c3, nintsol[c1, c2, c3]][x], {x, 0, 1}, PlotRange -&gt; {{0, 1}, Automatic}, GridLines -&gt; Automatic], {{c1, 1}, -3, 3, Appearance -&gt; &quot;Labeled&quot;}, {{c2, 1}, -3, 3, Appearance -&gt; &quot;Labeled&quot;}, {{c3, .99}, -3, 3, Appearance -&gt; &quot;Labeled&quot;}] </code></pre> <p><a href="https://i.stack.imgur.com/KAMRQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KAMRQ.jpg" alt="enter image description here" /></a></p>
65,631
<pre><code>Ticker[comp_String] := Interpreter["Company"][comp] /. Entity[_, x_] :&gt; x ticks = Ticker /@ {"Apple", "Google"} </code></pre> <blockquote> <p>{"NASDAQ:AAPL", "NASDAQ:GOOGL"}</p> </blockquote> <pre><code>DateListPlot[{ FinancialData[ticks[[1]], "CumulativeFractionalChange", {2010}], FinancialData[ticks[[2]], "CumulativeFractionalChange", {2010}], FinancialData["NASDAQ100", "CumulativeFractionalChange", {2010}] }, GridLines -&gt; Automatic, PlotLegends -&gt; {ticks[[1]], ticks[[2]], "NASDAQ100"}, Joined -&gt; True, ImageSize -&gt; 500, Filling -&gt; Bottom] </code></pre> <p><img src="https://i.stack.imgur.com/T6oFA.jpg" alt="enter image description here"></p> <p>I have many questions, but only pose two:</p> <p>(1) How can I efficiently apply a moving average of, let's say, 200 days to the above lines?</p> <p>(2) How can I sort the <code>PlotLegends</code>? (NASDAQ100 should appear before NASDAQ:GOOGL)</p>
Kuba
5,478
<p>You may never know how many wrappers there are, but those functions have this in common: the first argument is the only thing we care about.</p> <pre><code>f[x_?NumericQ] := N @ x; f[x_] := f @ First[x] </code></pre>
794,912
<p>I am reviewing Calculus III using <a href="http://www.jiblm.org/downloads/dlitem.aspx?id=82&amp;category=jiblmjournal" rel="nofollow">Mahavier, W. Ted's material</a> and get stuck on one question in chapter 1. Here is the problem:</p> <p>Assume $\vec{u},\vec{v}\in \mathbb{R}^3$. Find a vector $\vec{x}=(x,y,z)$ so that $\vec{x}\perp\vec{u}$ and $\vec{x}\perp\vec{v}$ and $x+y+z=1$.</p> <p>My attempt: From the last condition, I know that $\vec{x}$ ends at the plane intersecting the $x-,y-,z-$axis at $(1,0,0),(0,1,0)$ and $(0,0,1)$. From the orthogonal conditions, $\vec{x}$ is perpendicular to the plane formed by $\vec{u},\vec{v}$ if they are distinct, otherwise, any plane that contains $\vec{u},\vec{v}$. </p> <p>Am I on the right track? And how do I go from here? Thanks!</p> <p><strong>Edit</strong>: Thanks for all who responded! I do remember cross product. However, at this point of the book, the definition of cross product has not been introduced yet. I wonder whether there are other means to attack this problem without invoking a to-be-introduced concept?</p> <p>Thanks again!</p>
guest196883
43,798
<p>Do you remember the definition of the cross product? Given vectors $u = (u_1,u_2, u_3)$ and $v = (v_1,v_2,v_3)$ define $u\times v$ as the unique vector such that</p> <p>$$(u\times v)\cdot a = \left| \matrix{a_1&amp;a_2&amp;a_3\\u_1&amp;u_2&amp;u_3\\v_1&amp;v_2&amp;v_3}\right|$$</p> <p>where $a=(a_1,a_2,a_3)$, the dot represents the dot product and the expression on the right is the determinant of that matrix. </p> <p>From the properties of the determinant it's easy to see that $(u\times v)\cdot u = (u\times v)\cdot v = 0$, hence the cross product is orthogonal to both vectors. </p> <p>From here we have to assume that $u$ and $v$ are linearly independent. The magnitude of the cross product is $|u||v|\sin(\theta)$ where $\theta$ is the angle between $u$ and $v$. So if $\theta = 0$ then the cross product is $0$. In order to get $x+y+z = 1$ simply divide $u\times v$ by $z_1+z_2+z_3$ where $u\times v = (z_1,z_2,z_3)$. </p>
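As a concrete illustration (my own sketch, with made-up example vectors), here is the recipe from the answer carried out in plain Python; note that the final division only works when the components of $u\times v$ do not sum to zero.

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1, 2, 0), (0, 1, 1)        # example vectors (assumed independent)
z = cross(u, v)                     # (2, -1, 1), orthogonal to u and v
s = sum(z)                          # must be nonzero for the last step
x = tuple(c / s for c in z)         # scaled so the components sum to 1
print(x, dot(x, u), dot(x, v))      # (1.0, -0.5, 0.5) 0.0 0.0
```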
1,672,080
<p>I have trouble understanding the concepts of quotient topology and product topology (in the infinite case). </p> <p>I know that we want to give a topology to new spaces built from the old ones, but the thing is that I can't figure out why the definition of the quotient topology is natural, since we only require that the canonical projection should be continuous (I think this definition is given tersely), and on the other hand I don't understand why the box topology doesn't work in the infinite case, so that we have to define a very special topology on the infinite tuples in which most of the factors of a basic open set are the whole space itself. How could this work?</p> <p>And can you recommend some exercises to put these concepts into practice, please?</p> <p>Thanks a lot in advance.</p>
Stahl
62,500
<p>Both of these can be understood via <a href="https://en.wikipedia.org/wiki/Universal_property" rel="nofollow">universal properties</a>. Let's look at the product first.</p> <p>If you want to form the product of two sets $S$ and $T$, what do you do? You form the set $S\times T = \{(s,t)\mid s\in S, t\in T\}$. However, rather than just look at the set $S\times T$ itself, one should think about the properties that $S\times T$ satisfies. You can convince yourself that it in fact satisfies the following:</p> <blockquote> <p>Let $X$ be a set, and let $f : X\to S$ and $g : X\to T$ be two maps of sets. Then there exists a <em>unique</em> map of sets $h : X\to S\times T$ such that $f = \pi_S\circ h$ and $g = \pi_T\circ h$, where $\pi_S : S\times T\to S$ is the natural projection sending $(s,t)$ to $s$ (similarly for $\pi_T$).</p> </blockquote> <p>This can be generalized to create the notion of a product of arbitrarily many sets $\{S_i\}_{i\in I}$: you ask for the product $\prod_{i\in I} S_i$ to be a set such that for any set $X$ with maps $f_i : X\to S_i$ for each $i$, you have a unique map $X\to\prod_{i\in I} S_i$ such that factorizations of each $f_i$ analogous to the above factorization hold.</p> <p>If we categorify, we can replace "set" by "object" and "map of sets" by "morphism" in any category, and get the notion of product in an arbitrary category:</p> <blockquote> <p>An object $X$ in a category is the product of a family $\{X_i\}_{i\in I}$ of objects if and only if there exist morphisms $\pi_i : X \to X_i$ for all $i$, such that for every object $Y$ equipped with morphisms $f_i : Y \to X_i$ for all $i$ there exists a unique morphism $f : Y \to X$ such that $f_i = \pi_i\circ f $ for all $i\in I$. </p> </blockquote> <p>So, replacing "set" by "topological space" and "map of sets" by "continuous map," we obtain the [categorical] definition of the product of topological spaces. 
You can verify that in fact, the explicit description of the product topology that you know satisfies the universal property I have described, and so deserves to be called the product of topological spaces.</p> <p>Note that the box topology <em>does</em> exist on an infinite product of spaces (that is, the box topology is a topology on the product of spaces considered as a set), but it does not satisfy the universal property described above. In particular, there are "too many" open sets. (As you may have heard, the box topology is finer than the product topology: every open set in the product topology is open in the box topology, but not vice versa. Hence, for any collection of topological spaces $(X_i,\tau_i)_{i\in I}$, the map given by the identity map on sets will be a continuous map $id: (\prod X_i,\tau_{\textrm{box}})\to(\prod X_i,\tau_{\textrm{prod}})$, but it will fail to be continuous in general in the other direction - and this agrees with what we'd expect from the universal property above.)</p> <p>Quotients also have a universal property: given a topological space $(X,\tau)$ and a subspace $S\subseteq X$, the quotient space $X/S$ is defined to be the set $X/\sim$, where $x\sim y$ if $x,y\in S$ or $x = y$. Intuitively, you smash all of $S$ together into a point. There's a natural map of sets $\pi_S : X\to X/S$ given by mapping $x$ to the equivalence class $[x]$ of $x$ in $X/\sim$ (of course, you can similarly define a quotient space given any equivalence relation on a topological space). We still have to give $X/S$ a topology. 
You can describe it explicitly by looking at preimages of $U\subseteq X/S$ under $\pi_S$ and demanding that $\pi_S$ be continuous in the "most natural way," but you could also define the whole beast $X/S$ by the following universal property (which describes in what way the choice of topology making $\pi_S$ continuous <em>is</em> the most natural way):</p> <blockquote> <p>Given $(X,\tau)$ and $S\subseteq X$ as above, the quotient space $X/S$ (if it exists) is a topological space $(X/S,\tau_{quot})$ along with a continuous surjective map $\pi : X\to X/S$ such that for any continuous map $f : X\to Y$ such that $S$ is mapped to a single point under $f$, $f$ factors uniquely as $f = \tilde{f}\circ\pi$, with $\tilde{f} : X/S\to Y$ continuous.</p> </blockquote> <p>The best way to understand these beasts is to look at many examples. You might start by trying to verify that the explicit descriptions given to you actually satisfy the universal properties I've stated, or by trying to generalize the universal property of the quotient space I've given (it's not the most general definition you can make). If you are ever asked to show something is a quotient or product, a way to do it is to show that the object satisfies the correct universal property: essentially by definition, these properties classify objects <em>uniquely up to unique isomorphism</em> (which is stronger than just up to isomorphism!).</p>
796,199
<p>As far as I know, Brent's method for root finding is said to have superlinear convergence, but I haven't been able to find any more concrete information.</p> <p>Is its convergence rate known to be at least bounded between some known values?</p> <p>What is a good bibliographic reference for that?</p> <p>[EDIT]</p> <p>Also, another related question (I add it here because it is closely related to the previous one): How many calls to the function does Brent's method make per iteration, on average?</p> <p>[EDIT]</p> <p>Thanks to a comment by @Barry Cipra, I've reviewed the original source (Brent, 1971).</p> <p>This gave me an answer to one of my two questions:</p> <ul> <li>Brent's algorithm calls the function whose root is to be found once per iteration.</li> </ul> <p>The first question I posted remains open to me, as I am not an expert. As far as I understand, Brent's algorithm combines bisection with inverse quadratic interpolation. Bisection convergence is known to be linear, but I don't know about the convergence rate of inverse quadratic interpolation.</p> <p>I guess the convergence rate of Brent's method can be considered to be bounded between linear and that of inverse quadratic interpolation. So, the remaining question is: What is the convergence rate of inverse quadratic interpolation?</p>
hardmath
3,111
<p>Brent proposed his method as combining bisection steps, with guaranteed linear convergence, with <a href="http://en.wikipedia.org/wiki/Inverse_quadratic_interpolation#Behaviour" rel="noreferrer">inverse quadratic interpolation</a>, whose order of convergence is the positive root of:</p> <p>$$ \mu^3 - \mu^2 - \mu - 1 = 0 $$</p> <p>Thus $\mu \approx 1.839$. We can compare this with the "golden section" order of convergence of the <a href="http://en.wikipedia.org/wiki/Secant_method#Convergence" rel="noreferrer">secant method</a>, the positive root of:</p> <p>$$ \phi^2 - \phi - 1 = 0 $$</p> <p>or $\phi \approx 1.618 $ on one hand, and the order of convergence 2 of <a href="http://en.wikipedia.org/wiki/Newton%27s_method" rel="noreferrer">Newton's method</a> on the other.</p> <p>Of course there are trade-offs involved. Inverse quadratic interpolation requires only one new function evaluation per step, like the secant method, but uses a more complicated formula to update the root approximation, and inverse quadratic interpolation avoids the evaluation of a derivative as Newton's method requires.</p>
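For reference, the order of convergence $\mu$ (the positive root of $\mu^3-\mu^2-\mu-1=0$, sometimes called the tribonacci constant) is easy to compute with a few Newton steps; this numeric sketch is mine, not part of the original answer.

```python
# Newton's method on p(x) = x^3 - x^2 - x - 1, starting near the root.
def p(x):
    return x**3 - x**2 - x - 1

def dp(x):
    return 3 * x**2 - 2 * x - 1

mu = 2.0
for _ in range(50):
    mu -= p(mu) / dp(mu)
print(mu)  # ≈ 1.8392867552
```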
2,027,044
<p>Prove: $$ (a+b)^\frac{1}{n} \le a^\frac{1}{n} + b^\frac{1}{n}, \qquad \forall n \in \mathbb{N} $$ I have tried using the triangle inequality $ |a + b| \le |a| + |b| $, without any success.</p>
Dominik
259,493
<p>I assume that $a$ and $b$ are supposed to be positive? Then this inequality is equivalent to $a + b \le (a^{1/n} + b^{1/n})^n$, which follows immediately from expanding the term on the right-hand side with the binomial theorem.</p>
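A numerical spot-check of the equivalent inequality $a+b \le (a^{1/n}+b^{1/n})^n$ for positive $a,b$ (my own sketch, not part of the answer):

```python
# Check a + b <= (a^(1/n) + b^(1/n))^n on a small grid of positive values;
# the 1e-9 slack guards against float rounding in the equality case n = 1.
ok = all(
    a + b <= (a ** (1 / n) + b ** (1 / n)) ** n + 1e-9
    for n in range(1, 8)
    for a in (0.5, 1.0, 2.0, 10.0)
    for b in (0.25, 3.0, 7.5)
)
print(ok)  # True
```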
917,276
<p>If $U$ and $V$ are independent identically distributed standard normal, what is the distribution of their difference?</p> <p>I will present my answer here. I am hoping to know if I am right or wrong.</p> <p>Using the method of moment generating functions, we have</p> <p>\begin{align*} M_{U-V}(t)&amp;=E\left[e^{t(U-V)}\right]\\ &amp;=E\left[e^{tU}\right]E\left[e^{-tV}\right]\\ &amp;=M_U(t)M_V(-t)\\ &amp;=e^{\mu t+\frac{1}{2}t^2\sigma ^2}\cdot e^{-\mu t+\frac{1}{2}t^2\sigma ^2}\\ &amp;=e^{t^2\sigma ^2}\\ \end{align*} The last expression is the moment generating function for a random variable distributed normal with mean $0$ and variance $2\sigma ^2$. Thus $U-V\sim N(0,2\sigma ^2)$.</p> <p>For the second line from the top, the factorization follows from the fact that $U$ and $V$ are independent.</p> <p>Thanks for your input.</p> <p>EDIT: Since the random variables are distributed STANDARD normal, we have $\mu=0$ and $\sigma^2=1$, so the answer is $U-V\sim N(0,2)$.</p>
Qaswed
333,427
<p>In addition to the solution by the OP using the moment generating function, I'll provide a (nearly trivial) solution when <a href="https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables" rel="noreferrer">the rules about the sum</a> and <a href="https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_normal_deviates" rel="noreferrer">linear transformations of normal distributions</a> are known.</p> <p>The distribution of $U-V$ is identical to $U+a \cdot V$ with $a=-1$. So from the cited rules we know that $U+V\cdot a \sim N(\mu_U + a\cdot \mu_V,~\sigma_U^2 + a^2 \cdot \sigma_V^2) = N(\mu_U - \mu_V,~\sigma_U^2 + \sigma_V^2)~ \text{(for $a = -1$)} = N(0,~2)~\text{(for standard normal distributed variables)}$.</p> <hr> <p>Edit 2017-11-20: After I rejected the correction proposed by @Sheljohn of the variance and one typo, several times, he wrote them in a comment, so I finally did see them. Thank you @Sheljohn!</p>
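The rule can also be checked by simulation; this quick Monte Carlo sketch (mine, not the answerer's) draws standard normal pairs and inspects the sample mean and variance of $U-V$.

```python
import random

# Sample U - V for U, V iid standard normal; expect mean 0, variance 2.
random.seed(1)
N = 200_000
diffs = [random.gauss(0, 1) - random.gauss(0, 1) for _ in range(N)]
mean = sum(diffs) / N
var = sum((d - mean) ** 2 for d in diffs) / (N - 1)
print(mean, var)  # mean near 0, variance near 2
```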
348,748
<p>Find the solution for $Ax=0$ for the following $3 \times 3$ matrix:</p> <p>$$\begin{pmatrix}3 &amp; 2&amp; -3\\ 2&amp; -1&amp;1 \\ 1&amp; 1&amp; 1\end{pmatrix}$$</p> <p>I found the row reduced form of that matrix, which was </p> <p>$$\begin{pmatrix}1 &amp; 2/3&amp; -1\\ 0&amp; 1&amp;-9/7 \\ 0&amp; 0&amp; 1\end{pmatrix}$$</p> <p>I'm not sure what I'm supposed to do next to find the "unique" solution besides $x=0$? Do I further reduce that matrix to the identity matrix?</p>
Community
-1
<p>First note that any linear system of the form $\mathbf{Ax} = \mathbf{0}$ has either one solution (which is $\mathbf{x} = \mathbf{0}$) or infinite solutions. In your case, once you reduce $\mathbf{A}$ to row-echelon form, none of the entries on the leading diagonal are zero. This means your matrix is invertible.</p> <p>Hence, the only solution is $\mathbf{x} = \mathbf{0}$</p>
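One way to see the invertibility concretely: the determinant of the coefficient matrix is nonzero. A small sketch of mine computing it by cofactor expansion:

```python
# Determinant of A = [[3,2,-3],[2,-1,1],[1,1,1]] by cofactor expansion
# along the first row; nonzero => Ax = 0 has only the trivial solution.
A = [[3, 2, -3], [2, -1, 1], [1, 1, 1]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(A))  # -17, so A is invertible and x = 0 is the unique solution
```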
24,055
<p>Running this code:</p> <pre><code>Histogram[{RandomVariate[NormalDistribution[1/4,0.12],100], RandomVariate[NormalDistribution[3/4, 0.12], 100]}, Automatic, "Probability", PlotRange -&gt; {{0, 1}, {0, 1}}, Frame -&gt; True, PlotRangeClipping -&gt; True, FrameLabel -&gt; {Style["x axis", 15], Style["probability", 15]} ] </code></pre> <p>Gives me the following plot:</p> <p><img src="https://i.stack.imgur.com/jYSLN.png" alt="enter image description here"></p> <p>As you can see, the label on the right ("probability") is not printed correctly. The character "y" is missing. What's going on here?</p> <p>I am using Mathematica 9.0.0.0. I ran this on two laptops, one with Windows 7 and the other with Windows 8.</p> <p><strong>Update</strong>: Judging by the comments, this seems to be a bug. So now the question becomes: <strong>Is there a workaround?</strong></p> <p><strong>Update</strong>: This seems to be bug, so I'll tag as such. In the meantime, see the answers for workarounds.</p>
Mr.Wizard
121
<p>This seems to be related to, or a manifestation of:<br> <a href="https://mathematica.stackexchange.com/q/18988/121">Poor anti-aliasing in Rotated text with ClearType on</a></p> <p>On my system Simon's workaround is successful.<br> Using <code>Style["probability", 15, FontOpacity -&gt; 0.999]</code>:</p> <p><img src="https://i.stack.imgur.com/TAHyv.png" alt="enter image description here"></p>
1,005,291
<p>I understand that in order to prove this to be one to one, I need to prove $2$ numbers, $a$ and $b$, in the same set are equal. </p> <p>This is what I did:</p> <p>$$\sqrt{a} + a + 2 = \sqrt{b} + b + 2$$ $$\sqrt{a} + a = \sqrt{b} + b$$ $$a + a^2 = b + b^2$$</p> <p>How would I arrive at $a = b$? Is it possible?</p>
John
105,625
<p>$f(x)=\sqrt{x}+x+2$ is strictly increasing on $(0,\infty)$, so it's one-one (suppose not, use the strict monotonicity to draw a contradiction).</p> <p>Or, from your second step,</p> <p>$$\sqrt{a} + a = \sqrt{b} + b\iff \sqrt{a} - \sqrt{b}+ a -b=0 \iff(\sqrt{a} - \sqrt{b})(1+\sqrt{a}+\sqrt{b})=0 $$</p> <p>Since $1+\sqrt{a}+\sqrt{b}&gt;0$, we have $\sqrt{a} - \sqrt{b}=0$.</p>
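The factoring step can be spot-checked numerically (my own sketch): for positive $a,b$, $\sqrt a-\sqrt b+a-b$ should equal $(\sqrt a-\sqrt b)(1+\sqrt a+\sqrt b)$.

```python
from math import sqrt, isclose

# Verify the factorization used above on a few positive pairs (a, b).
pairs = [(4.0, 9.0), (2.0, 2.0), (0.25, 7.0), (10.0, 0.5)]
for a, b in pairs:
    lhs = sqrt(a) - sqrt(b) + a - b
    rhs = (sqrt(a) - sqrt(b)) * (1 + sqrt(a) + sqrt(b))
    assert isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12)
print("factorization checks out")
```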
1,242,001
<p>The following is the notation for Fermat's Last Theorem </p> <p>$\neg\exists_{\{a,b,c,n\},(a,b,c,n)\in(\mathbb{Z}^+)\color{blue}{^4}\land n&gt;2\land abc\neq 0}a^n+b^n=c^n$ </p> <p>I understand everything in the notation besides the 4 highlighted in blue. Can someone explain to me what this means?</p>
Kraxxus
304,794
<p>If you have a complicated function $g(x)$ involving a lot of products and exponentials ask <a href="https://en.wikipedia.org/wiki/John_Napier" rel="nofollow">John Napier</a> for help. He would tell you to make it even more complicated $f(x)=\log(g(x))$. Then by the chain rule $f'(x)=\frac{g'(x)}{g(x)}$ or $g'(x)=g(x)\cdot f'(x)$.</p> <p>What would be the gain? Well $\log(g(x))$ might be easier to differentiate. In your case $\log(g(x))=\log(1/\sqrt{2\pi})-(x-2)^2/2$ and its derivative is $-(x-2)$ so that $g'(x)=g(x)\cdot(2-x)$</p>
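The claimed derivative can be checked against a central finite difference; a small sketch of mine, using $g(x)=\frac{1}{\sqrt{2\pi}}e^{-(x-2)^2/2}$ as in the answer:

```python
from math import exp, pi, sqrt

def g(x):
    return exp(-(x - 2) ** 2 / 2) / sqrt(2 * pi)

# Logarithmic differentiation predicts g'(x) = g(x) * (2 - x);
# compare against a central finite-difference approximation.
h = 1e-6
for x in (-1.0, 0.5, 2.0, 3.7):
    fd = (g(x + h) - g(x - h)) / (2 * h)
    assert abs(fd - g(x) * (2 - x)) < 1e-8
print("derivative formula confirmed")
```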
654,617
<p>$v$ being a vector. I never understood what they mean and haven't found online resources. Just a quick question.</p> <p>Thought it was absolute and magnitude respectively when regarding vectors. need confirmation</p>
imranfat
64,546
<p>The double bar indicates the magnitude of the vector. In essence algebraically that is still the absolute value, meaning the square root of $x^2+y^2$ (in case of 2D) </p>
3,306,747
<p>Here is my attempt </p> <p>h = 3k -7 ----(1)</p> <p>(h-1)^2 + (k -1)^2 = 10/4</p> <p>(h-1)^2 + (3h - 8)^2 = 10/4</p> <p>This second one isn't working. Is my approach wrong?</p> <p>P.S: Sorry for the typo. Also I assumed the center is C(h,k)</p>
Ali Shadhar
432,085
<p>From <a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Reihenentwicklungen#Potenzen_des_Arkussinus" rel="nofollow noreferrer">here</a>, we have </p> <p><span class="math-container">$$\frac{\arcsin z}{\sqrt{1-z^2}}=\sum_{n=1}^\infty\frac{(2z)^{2n-1}}{n{2n \choose n}}$$</span></p> <p>substituting <span class="math-container">$z=\sqrt{y}$</span>, we get</p> <p><span class="math-container">$$\sum_{n=1}^\infty\frac{4^ny^n}{n{2n \choose n}}=2\sqrt{y}\frac{\arcsin\sqrt{y}}{\sqrt{1-y}}$$</span></p> <p>Now multiply both sides by <span class="math-container">$-\frac{\ln(1-y)}{y}$</span> then integrate from <span class="math-container">$y=0$</span> to <span class="math-container">$1$</span> and use the fact that <span class="math-container">$-\int_0^1 y^{n-1}\ln(1-y)\ dy=\frac{H_n}{n}$</span>, we get</p> <p><span class="math-container">\begin{align} \sum_{n=1}^\infty\frac{4^nH_n}{n^2{2n \choose n}}&amp;=-2\int_0^1\frac{\arcsin\sqrt{y}}{\sqrt{y}\sqrt{1-y}}\ln(1-y)\ dy\overset{\arcsin\sqrt{y}=x}{=}-8\int_0^{\pi/2}x\ln(\cos x)\ dx\\ &amp;=-8\int_0^{\pi/2}x\left\{-\ln2-\sum_{n=1}^\infty\frac{(-1)^n\cos(2nx)}{n}\right\}\ dx\\ &amp;=\pi^2\ln2+8\sum_{n=1}^\infty\frac{(-1)^n}{n}\int_0^{\pi/2}x\cos(2nx) dx\\ &amp;=\pi^2\ln2+8\sum_{n=1}^\infty\frac{(-1)^n}{n}\left(\frac{\pi\sin(n\pi)}{4n}+\frac{\cos(n\pi)}{4n^2}-\frac1{4n^2}\right)\\ &amp;=\pi^2\ln2+2\pi\sum_{n=1}^\infty\frac{(-1)^n\sin(n\pi)}{n^2}+2\sum_{n=1}^\infty\frac{(-1)^n\cos(n\pi)}{n^3}-2\sum_{n=1}^\infty\frac{(-1)^n}{n^3}\\ &amp;=\pi^2\ln2+0+2\sum_{n=1}^\infty\frac{(-1)^n(-1)^n}{n^3}-2\operatorname{Li}_3(-1)\\ &amp;=\pi^2\ln2+2\zeta(3)-2\left(-\frac34\zeta(3)\right)\\ &amp;=\pi^2\ln2+\frac72\zeta(3) \end{align}</span></p>
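The closed form can be corroborated numerically. The sketch below (mine, not part of the derivation) accumulates partial sums of $\sum 4^n H_n/(n^2\binom{2n}{n})$ using the recurrence $\frac{4^{n+1}}{\binom{2n+2}{n+1}}=\frac{2(n+1)}{2n+1}\cdot\frac{4^n}{\binom{2n}{n}}$ to avoid integer overflow; the terms decay like $\ln n/n^{3/2}$, so convergence is slow and only loose agreement is expected.

```python
from math import pi, log

ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)
closed = pi**2 * log(2) + 3.5 * ZETA3

s = 0.0
r = 2.0   # r_n = 4^n / C(2n, n), starting at n = 1
H = 1.0   # harmonic number H_n
n = 1
while n <= 200_000:
    s += r * H / n**2
    r *= 2 * (n + 1) / (2 * n + 1)
    n += 1
    H += 1 / n
print(s, closed)  # partial sum slowly approaching pi^2 ln2 + 7/2 zeta(3) ≈ 11.05
```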
277,250
<p>Let $\mathbb{N}$ be the set of natural numbers and $\beta \mathbb N$ denotes the Stone-Cech compactification of $\mathbb N$. </p> <p>Is it then true that $\beta \mathbb N\cong \beta \mathbb N \times \beta \mathbb N $ ? </p>
YCor
14,094
<p>The negative answer is equivalent to showing that there are two disjoint subsets $A,B$ of $\mathbf{N}^2$ with non-disjoint closures in $(\beta\mathbf{N})^2$. This can be made explicit: take $A=\{(n,m):n=m\}$ and $B=\{(n,m):n&gt;m\}$. Let $\omega$ be a non-principal ultrafilter on $\mathbf{N}$. Let $V$ be a neighborhood of $(\omega,\omega)$ in $(\beta\mathbf{N})^2$. Then $V$ contains $U\times U$ for some $U\in\omega$. The latter contains both points of $A$ and of $B$. Since this holds for every $V$, this shows that $(\omega,\omega)$ belongs to the closure of both $A$ and $B$.</p>
277,250
<p>Let $\mathbb{N}$ be the set of natural numbers and $\beta \mathbb N$ denotes the Stone-Cech compactification of $\mathbb N$. </p> <p>Is it then true that $\beta \mathbb N\cong \beta \mathbb N \times \beta \mathbb N $ ? </p>
M.González
39,421
<p>An indirect argument: </p> <p>Since the Banach space of continuous functions $C(\beta\mathbb{N})$ is isomorphic to $\ell_\infty$, it contains no complemented copies of $c_0$. </p> <p>Since $C(\beta\mathbb{N}\times\beta\mathbb{N})$ is isomorphic to $C\big(\beta \mathbb{N},C(\beta\mathbb{N})\big)$, it contains a complemented copy of $c_0$. See [P. Cembranos. <a href="https://doi.org/10.1090/S0002-9939-1984-0746089-2" rel="nofollow noreferrer">$C(K,E)$ contains a complemented copy of $c_0$</a>. Proc. Amer. Math. Soc. 91 (1984), 556-558.] </p>
2,050,724
<p>In the proof of Jensen's inequality in a probabilistic setting, the book gives the following demonstration: <br> Expand the Taylor series of $f(x)$ around $\mu =\mathbb {E}[X]$. $$f(x)=f(\mu)+f'(\mu)(x-\mu)+\frac {f''(\epsilon)(x-\mu)^2}{2}$$ For $\epsilon$ between $x$ and $\mu$. Since $f$ is convex, $f''(\epsilon)\ge 0$ therefore $f(X)\ge f(\mu)+f'(\mu)(X-\mu)$. Taking expectations of both sides gives $$\mathbb{E}[f(X)]\ge f(\mu)$$ The part I do not understand is how they truncate the Taylor series to only three terms and are still able to say that the truncated expression equals $f(x)$. In my head, this should only be an approximation. <br> I suppose one must give $\epsilon$ as a function of $x$ but the justification of this step still eludes me, and if there are terms missing in the expansion, then the inequality does not necessarily follow. <br> How does one justify the step of truncating the Taylor series to only three terms while still maintaining equality?</p>
Sam Blattner
397,101
<p>Notice that the last term is "for $\epsilon$ between $x$ and $\mu$." That is the error term, so it isn't an approximation. It is exact.</p> <p>The error term is given by the mean value theorem. </p>
2,050,724
<p>In the proof of Jensen's inequality in a probabilistic setting, the book gives the following demonstration: <br> Expand the Taylor series of $f(x)$ around $\mu =\mathbb {E}[X]$. $$f(x)=f(\mu)+f'(\mu)(x-\mu)+\frac {f''(\epsilon)(x-\mu)^2}{2}$$ For $\epsilon$ between $x$ and $\mu$. Since $f$ is convex, $f''(\epsilon)\ge 0$ therefore $f(X)\ge f(\mu)+f'(\mu)(X-\mu)$. Taking expectations of both sides gives $$\mathbb{E}[f(X)]\ge f(\mu)$$ The part I do not understand is how they truncate the Taylor series to only three terms and are still able to say that the truncated expression equals $f(x)$. In my head, this should only be an approximation. <br> I suppose one must give $\epsilon$ as a function of $x$ but the justification of this step still eludes me, and if there are terms missing in the expansion, then the inequality does not necessarily follow. <br> How does one justify the step of truncating the Taylor series to only three terms while still maintaining equality?</p>
Fozz
341,955
<p>In general, if we have an <a href="https://en.wikipedia.org/wiki/Analytic_function" rel="nofollow noreferrer">analytic</a> function $f$, we can write out the Taylor series for $f$ around a point $x$ by $$f(y)=\sum_{i=0}^\infty f^{(i)}(x)\frac{(y-x)^i}{i!}$$ and we'd have equality for all $y$ in some neighborhood of $x$. In general, when we don't have an analytic function, and say we just have a twice differentiable function (as in your case), the series above doesn't even make sense since we don't necessarily have $f^{(i)}(x)$ defined for $i=3,4,5...$ However it's still always true that for every $y$ in some neighborhood of $x$, we can find $z$ in between $x$ and $y$ (i.e. if $x\leq y$ then $x\leq z\leq y$ and if $x\geq y$ then $x\geq z\geq y$) such that $$f(y)=f(x)+f'(x)(y-x)+\frac{f''(z)}{2}(y-x)^2.$$ The equality is exact, but just for our choice of $y$. We'd (maybe) have a different $z$ for a different choice of $y$ near $x$. In the proof of Jensen's inequality, the fact that $z$ may vary doesn't pose a problem because we just use the fact that $f''$ is nonnegative.</p>
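Jensen's inequality also holds for any empirical distribution, which gives an easy numerical illustration (my own sketch) with the convex function $f(x)=x^2$:

```python
# For convex f, the average of f over any sample dominates f of the average.
f = lambda x: x * x
xs = [0.1 * k for k in range(-10, 15)]   # a fixed sample standing in for X
mean_x = sum(xs) / len(xs)
mean_fx = sum(f(x) for x in xs) / len(xs)
print(mean_fx, f(mean_x))
assert mean_fx >= f(mean_x)   # E[f(X)] >= f(E[X])
```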
1,289,626
<blockquote> <p>find the Range of $f(x) = |x-6|+x^2-1$</p> </blockquote> <p>$$ f(x) = |x-6|+x^2-1 =\left\{ \begin{array}{c} x^2+x-7,&amp; x&gt;0 .....(b) \\ 5,&amp; x=0 .....(a) \\ x^2-x+5,&amp; x&lt;0 ......(c) \end{array} \right. $$</p> <p>from eq (b) i got $$f(x)= \left(x+\frac12\right)^2-\frac{29}4 \ge-\frac{29}4$$<br> and from eq (c) i got $$f(x)= \left(x-\frac12\right)^2+\frac{19}4 \ge\frac{19}4$$<br></p> <p>and eq(b) tells me that it also passes through 5 and so generalize all this and found its range is $\left[-\frac{29}4 , \infty\right)$</p> <p>but the graph says its range is $(5, \infty)$</p>
Surb
154,545
<p>$$f'(x)=\frac{x-6}{|x-6|}+2x$$</p> <p>$f'(x)=0\iff x=\frac{1}{2}$ and $f'(x)&lt;0$ if $x&lt;\frac{1}{2}$ and $f'(x)&gt;0$ if $x&gt;\frac{1}{2}$, therefore the range is $[f(\frac{1}{2}),+\infty [=[\frac{19}{4},+\infty[$.</p>
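A quick grid search (my own sketch) confirms the minimum value $f(1/2)=19/4$:

```python
# f(x) = |x - 6| + x^2 - 1; scan a grid containing x = 1/2 exactly.
f = lambda x: abs(x - 6) + x * x - 1
grid = [k / 1000 for k in range(-8000, 8001)]
fmin = min(f(x) for x in grid)
argmin = min(grid, key=f)
print(argmin, fmin)  # 0.5 4.75, i.e. the range is [19/4, +inf)
```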
3,371,888
<p><span class="math-container">$$\left(\!\!{{a+b}\choose k}\!\!\right)= \sum_{j=0}^k \left(\!\!{a\choose j}\!\!\right) \cdot \left(\!\!{b\choose {k-j}}\!\!\right)$$</span></p> <p>I am quite confused about the case of multichoose. I was able to prove this equation if only "n choose k" form was used as both sides would be the k-th coefficients of <span class="math-container">$(1+x)^{a+b}$</span>.</p> <p>Any help to understand this would be very appreciated. </p>
Certainly not a dog
691,550
<p>Consider the ways to choose any <span class="math-container">$k$</span> objects from two piles (of size <span class="math-container">$a$</span> and <span class="math-container">$b$</span>).</p> <p>One way is to simply combine the piles and choose them (the ways to do this is <span class="math-container">$\binom{a+b}k$</span>, a.k.a. the LHS). </p> <p>Another way is to first choose some, say, <span class="math-container">$j$</span> objects from pile <span class="math-container">$a$</span> (can be done in <span class="math-container">$\binom aj$</span> ways) and then choose the remaining <span class="math-container">$k-j$</span> objects from pile <span class="math-container">$b$</span> (can be done in <span class="math-container">$\binom b{k-j}$</span> ways, so this operation may be done in <span class="math-container">$\binom aj \binom b{k-j}$</span> ways). Adding all the cases for the different <span class="math-container">$j$</span>s we get <span class="math-container">$\sum_{j=0}^k \binom aj \binom b{k-j}$</span>. From the equivalence of these processes we get the result. </p>
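The same double-counting argument goes through for the multichoose version in the question, since a multiset of size $k$ from the combined piles splits into a multiset from pile $a$ and one from pile $b$. Here is a quick numeric check of my own, using the standard identity $\left(\!\binom{n}{k}\!\right)=\binom{n+k-1}{k}$:

```python
from math import comb

# My verification (not part of the answer): check the multichoose Vandermonde
# identity for small parameters, with multichoose(n, k) = C(n + k - 1, k).
def multichoose(n, k):
    return comb(n + k - 1, k)

for a in range(1, 6):
    for b in range(1, 6):
        for k in range(0, 8):
            lhs = multichoose(a + b, k)
            rhs = sum(multichoose(a, j) * multichoose(b, k - j)
                      for j in range(k + 1))
            assert lhs == rhs
```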
822,711
<p>In case of Riemannian geometry the connection $\Gamma^i_{jk}$ as is derived from the derivatives of the metric tensor $g_{ij}$ is ought to be symmetric wrt to its lower two indices. But in the case of Non-Riemannian Geometry that need not be the case, so the question is how do you actually construct such connections? Do you again use the metric tensor?</p>
PascExchange
311,814
<p>A connection is an abstract non-unique $\mathbb{R}$-linear map on a smooth manifold $M$ with tangent bundle $TM$ and smooth vector fields $\Gamma(TM)$ given by $$\nabla:\begin{cases}\Gamma(TM)\times \Gamma(TM)\to \Gamma(TM)\\ (X,Y)\mapsto \nabla_XY\end{cases}$$ satisfying the following properties</p> <ol> <li>$\nabla_{fX}Y=f\nabla_XY$ for all $f\in C^\infty(M)$</li> <li>$\nabla_X(fY)=X(f)Y+f\nabla_XY$</li> <li>$\nabla_XY-\nabla_YX=[X,Y]$</li> </ol> <p>(Property 3 is torsion-freeness; for a general, possibly non-Riemannian connection it may be dropped, and connections with torsion are exactly the ones whose $\Gamma^i_{jk}$ fail to be symmetric in the lower indices.) It is a fact that the zero map doesn't satisfy the above properties, but if one has two connections $\nabla^1,\nabla^2$, one can create another by $g\nabla^1+(1-g)\nabla^2$. When looking at Riemannian manifolds $(M,g)$ we get a unique connection called the Levi-Civita connection, which in addition satisfies what one calls metric compatibility $$Xg(Y,Z)=g(\nabla_XY,Z)+g(Y,\nabla_XZ)$$ The proof mainly consists in looking at all cyclic permutations of the above equation and then combining them in a smart way to get an explicit formula, which shows existence and uniqueness and satisfies the above properties. 
This formula is $$2g(\nabla_XY,Z)=Xg(Y,Z)+Yg(Z,X)-Zg(X,Y)+g([X,Y],Z)-g([X,Z],Y)-g([Y,Z],X)$$</p> <p>What you denoted as a connection is in fact not a connection but the so-called Christoffel symbols, obtained from the Levi-Civita connection via $$\nabla_{\partial_j}\partial_k=\sum\limits_{i}\Gamma^i_{jk}\partial_i$$ where the coordinate vector fields $\partial_i$ are defined by taking a chart $(U,\varphi)$ at a point $p\in M$ and setting $$\partial_i(p)(f)=\frac{\partial (f\circ\varphi^{-1})}{\partial x_i}(\varphi(p))$$ This is used to express a vector field in local coordinates, $X(p)=\sum\limits_iX_i(p)\partial_i(p)$, and gives an expression of the Levi-Civita connection in local coordinates $$\nabla_XY=\sum\limits_{i=1}^m\left(\sum_jX_j\partial_jY_i+\sum\limits_{j,k}\Gamma_{jk}^iX_jY_k\right)\partial_i$$ Like this we realize that the Christoffel symbols can be interpreted as a measure of how strongly the derivative of a vector field along another vector field on a Riemannian manifold deviates from the standard partial derivative on $\mathbb{R}^n$.</p>
4,298,951
<p>Let us define a sequence <span class="math-container">$(a_n)$</span> as follows:</p> <p><span class="math-container">$$a_1 = 1, a_2 = 2 \text{ and } a_{n} = \frac14 a_{n-2} + \frac34 a_{n-1}$$</span></p> <p>Prove that the sequence <span class="math-container">$(a_n)$</span> is Cauchy and find the limit.</p> <hr /> <p>I have proved that the sequence <span class="math-container">$(a_n)$</span> is Cauchy. But unable to find the limit. I have observed that the sequence <span class="math-container">$(a_n)$</span> is decreasing for <span class="math-container">$n \ge 2$</span>.</p>
daㅤ
799,923
<p>Rewrite <span class="math-container">$a_n$</span> as <span class="math-container">$$a_1=1,\ a_2=2,\ a_{n+2}=\dfrac{3}{4}a_{n+1}+\dfrac{1}{4}a_n \mathrm{\ for\ } n\geqq 1.$$</span> We can get <span class="math-container">\begin{align} &amp;a_{n+2}-a_{n+1}=-\dfrac{1}{4}(a_{n+1}-a_{n}) \cdots (A)\\ &amp;a_{n+2}+\dfrac{1}{4}a_{n+1}=a_{n+1}+\dfrac{1}{4}a_{n} \cdots (B) \end{align}</span></p> <p>Letting <span class="math-container">$b_n=a_{n+1}-a_n$</span>, we get <span class="math-container">$b_{n+1}=-\dfrac{1}{4}b_n$</span> from <span class="math-container">$(A)$</span>, thus <span class="math-container">$\{b_n \}$</span> is a geometric progression of ratio <span class="math-container">$-\dfrac{1}{4}$</span>. Thus <span class="math-container">$b_n=b_1 \cdot (-\frac{1}{4})^{n-1}=(-\frac{1}{4})^{n-1}$</span>. Therefore <span class="math-container">$$a_{n+1}-a_n=\left(-\frac{1}{4}\right)^{n-1} \cdots (C)$$</span></p> <p>Next, letting <span class="math-container">$c_n=a_{n+1}+\dfrac{1}{4}a_n$</span>, we get <span class="math-container">$c_{n+1}=c_n$</span> from <span class="math-container">$(B)$</span>. This means that all terms of <span class="math-container">$\{c_n \}$</span> are equal, so <span class="math-container">$c_n=c_1=\dfrac{9}{4}$</span>. Thus <span class="math-container">$$a_{n+1}+\dfrac{1}{4}a_n=\dfrac{9}{4} \cdots (D)$$</span></p> <p>Calculating <span class="math-container">$(D)-(C)$</span>, we get <span class="math-container">$\dfrac{5}{4}a_n=\dfrac{9}{4}-\left(-\dfrac{1}{4}\right)^{n-1},$</span> i.e., <span class="math-container">$a_n=\dfrac{9}{5}-\dfrac{4}{5}\left(-\dfrac{1}{4}\right)^{n-1}$</span>.</p> <p>Letting <span class="math-container">$n\to \infty,$</span> we get <span class="math-container">$\displaystyle\lim_{n\to \infty} a_n =\dfrac{9}{5}.$</span></p>
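As a numeric check of my own (not part of the original solution), we can iterate the recurrence and compare it against the closed form $a_n=\frac95-\frac45\left(-\frac14\right)^{n-1}$:

```python
# My verification: iterate a_{n+2} = (3/4) a_{n+1} + (1/4) a_n and compare
# against the closed form derived above (n is 1-based).
a = [1.0, 2.0]
for _ in range(38):
    a.append(0.75 * a[-1] + 0.25 * a[-2])

closed = lambda n: 9 / 5 - (4 / 5) * (-0.25) ** (n - 1)

assert all(abs(a[n - 1] - closed(n)) < 1e-12 for n in range(1, 41))
assert abs(a[-1] - 9 / 5) < 1e-12        # the tail is already at the limit 9/5
```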
355,888
<p>Consider $x''-2x'+x= te^t$</p> <p>Determine the solution with initial values $x(1) = e,$ $x'(1) = 0.$</p> <p>I know this looks like and probably is a very easy question, but i'm not getting the right answer when i try and solve putting into quadratic form. Could someone please demonstrate or show me a different method? </p> <p>Many thanks :)</p>
Ron Gordon
53,268
<p>This is actually a tricky problem because the right-hand side is a solution of the left-hand side set to zero (the homogeneous solution). </p> <p>The homogeneous solution $x^{(H)}$ is </p> <p>$$x^{(H)}(t) = A e^{t} + B t e^{t}$$</p> <p>This is because the characteristic equation has $1$ as a double solution, so we have to put a secular component $t$ onto one of the solutions.</p> <p>This makes finding the particular solution $x^{(P)}$ difficult because it is a solution to the homogeneous equation. The way around this is to assume that</p> <p>$$x^{(P)} = C t^3 e^{t}$$</p> <p>and solve for $C$:</p> <p>$$6 C t e^{t} = t e^{t} \implies C = \frac{1}{6}$$</p> <p>Then solve for $A$ and $B$ using the initial conditions.</p>
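As a sanity check (my addition, not part of the original answer), a central finite-difference scheme confirms that $x_P(t)=\frac16 t^3e^t$ really satisfies the ODE:

```python
import math

# My numeric verification: check that x_P(t) = t^3 e^t / 6 satisfies
# x'' - 2x' + x = t e^t, using central finite differences.
def x_p(t):
    return t**3 * math.exp(t) / 6

h = 1e-4
for t in [0.3, 1.0, 2.5]:
    d1 = (x_p(t + h) - x_p(t - h)) / (2 * h)             # approx x'(t)
    d2 = (x_p(t + h) - 2 * x_p(t) + x_p(t - h)) / h**2   # approx x''(t)
    residual = d2 - 2 * d1 + x_p(t) - t * math.exp(t)
    assert abs(residual) < 1e-3
```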
1,128,414
<blockquote> <p>Let $F:C[0,2]\to C[0,2]$ be the map defined by $(F(f))(x)=x^2f(x)$. Show that $F$ is continuous as a function from $(C[0,2],\|\cdot\|_{\sup})$ to $(C[0,2],\|\cdot\|_{2})$.</p> </blockquote> <p>I read this solution:</p> <blockquote> <p>Let $f\in C[0,2]$. Let $\epsilon&gt;0$. Choose $\delta=\epsilon/(4\sqrt{2})$. If $\|g-f\|_{\sup}&lt;\delta$ we have $$ \|F(g)-F(f)\|_2 = \left(\int_0^2 x^4(f(x)-g(x))^2\,dx\right)^{1/2} \le 2^2\sqrt{2}\|g-f\|_\sup&lt;4\sqrt{2}\delta=\epsilon $$</p> </blockquote> <p>I'm confused how the marker got the inequality from $$\left(\int_0^2 x^4(f(x)-g(x))^2 \ dx \right)^{1/2} \leq 2^2 \sqrt{2} ||g-f||_{sup}$$</p> <p>Could someone please explain this step for me, and how they got the motivation to set $\delta = \epsilon/(4\sqrt{2})$? </p>
Brian Fitzpatrick
56,960
<p>Here $T:P_1\to P_1$ is defined by $T(a+b\,x)=6\,(a-b)+(12\,a-11\,b)\,x$ and $\beta$ is the basis $\{p,q\}$ for $P_1$ where $p(x)=3+4\,x$ and $q(x)=2+3\,x$.</p> <p>Note that $$ \begin{array}{rcrcr} T(p) &amp; = &amp; \color{red}{-2}\,p &amp; + &amp; \color{blue}{0}\,q \\ T(q) &amp; = &amp; \color{green}{0}\,p &amp; + &amp; (\color{purple}{-3})\,q \end{array} $$ This implies $$ [T]_\beta= \begin{bmatrix} \color{red}{-2} &amp; \color{green}{0} \\ \color{blue}{0} &amp; \color{purple}{-3} \end{bmatrix} $$</p>
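A quick check of my own (not in the original answer): representing $a+bx$ by the pair $(a,b)$, we can confirm that $p$ and $q$ are indeed eigenvectors of $T$ with eigenvalues $-2$ and $-3$.

```python
# My verification sketch: T(a + b x) = 6(a - b) + (12a - 11b) x,
# acting on coefficient pairs (a, b).
T = lambda a, b: (6 * (a - b), 12 * a - 11 * b)

p = (3, 4)   # p(x) = 3 + 4x
q = (2, 3)   # q(x) = 2 + 3x

assert T(*p) == (-2 * p[0], -2 * p[1])   # T(p) = -2 p
assert T(*q) == (-3 * q[0], -3 * q[1])   # T(q) = -3 q
```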
1,458,579
<p>Suppose $f_n$ and $g_n$ are two sequences of functions such that $f_n\cdot g_n$ converges to $f\cdot g$ and $g_n$ converges to $g$. Can we prove that $f_n$ converges to $f$? How?</p>
André Nicolas
6,312
<p>Remark: Thanks to OP for correcting a serious error I made.</p> <p>The probability the first collection starts with $1$ is $p$, and therefore the probability the second collection starts with $0$ is $p$, and the probability it starts with $1$ is $1-p$.</p> <p><strong>Given</strong> the first bit in the second collection is $0$, the expected number of <strong>additional</strong> $0$'s in the collection is $\frac{1}{p}-1$, for a total of $\frac{1}{p}$. A similar analysis works for first bit in the collection a $1$ bit.</p> <p>So the expectation is $p\cdot\frac{1}{p}+(1-p)\cdot\frac{1}{1-p}$.</p>
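If I am reading the setup correctly (an i.i.d. bit stream with $P(\text{bit}=1)=p$, the "collections" being the maximal runs of equal bits), the conclusion $p\cdot\frac1p+(1-p)\cdot\frac1{1-p}=2$ can be checked by simulation. The function names and parameters below are my own reconstruction, not the original poster's:

```python
import random

# Monte Carlo sketch (my assumption about the setup): the second maximal run
# of equal bits in an i.i.d. stream has expected length 2 for every p.
def second_run_length(p, rng):
    prev = 1 if rng.random() < p else 0    # first bit starts the first run
    run = 1
    completed = []
    while len(completed) < 2:
        b = 1 if rng.random() < p else 0
        if b == prev:
            run += 1
        else:
            completed.append(run)          # a run just ended
            prev, run = b, 1
    return completed[1]

rng = random.Random(0)
avg = sum(second_run_length(0.3, rng) for _ in range(20000)) / 20000
assert abs(avg - 2) < 0.1
```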
197,441
<p>I have a list,</p> <pre><code>l1 = {{a, b, 3, c}, {e, f, 5, k}, {n, k, 12, m}, {s, t, 1, y}} </code></pre> <p>and want to apply differences on the third parts and keep the parts right of the numerals collected.</p> <p>My result should be</p> <pre><code>l2 = {{2, c, k}, {7, k, m}, {-11, m, y}} </code></pre> <p>I tried Map and MapAt, but I could not get anywhere. I could work around split things up and connect again. But is there a better way to do it?</p>
Roman
26,598
<p>This is very similar to kglr's first solution but picks the relevant quantities a bit more explicitly:</p> <pre><code>l2 = BlockMap[{#[[2, 3]] - #[[1, 3]], #[[1, 4]], #[[2, 4]]} &amp;, l1, 2, 1] </code></pre> <blockquote> <p>{{2, c, k}, {7, k, m}, {-11, m, y}}</p> </blockquote> <p>With a parameter to change the symbolic column quickly:</p> <pre><code>l2 = With[{col = 3}, BlockMap[{#[[2,col]] - #[[1,col]], #[[1,col+1]], #[[2,col+1]]} &amp;, l1, 2, 1]] </code></pre> <blockquote> <p>{{2, c, k}, {7, k, m}, {-11, m, y}}</p> </blockquote>
197,441
<p>I have a list,</p> <pre><code>l1 = {{a, b, 3, c}, {e, f, 5, k}, {n, k, 12, m}, {s, t, 1, y}} </code></pre> <p>and want to apply differences on the third parts and keep the parts right of the numerals collected.</p> <p>My result should be</p> <pre><code>l2 = {{2, c, k}, {7, k, m}, {-11, m, y}} </code></pre> <p>I tried Map and MapAt, but I could not get anywhere. I could work around split things up and connect again. But is there a better way to do it?</p>
Edmund
19,542
<p>A solution with <code>MapThread</code> on an offset <code>Partition</code>.</p> <pre><code>MapThread[Sequence @@ #@#2 &amp;, {{Differences, Identity}, Transpose@#}] &amp; /@ Partition[l1[[All, 3 ;;]], 2, 1] </code></pre> <blockquote> <pre><code>{{2, c, k}, {7, k, m}, {-11, m, y}} </code></pre> </blockquote> <p><code>Differences</code> is applied to the integers while <code>Identity</code> preserves the form of the symbols.</p> <p>Hope this helps.</p>
1,223,823
<p>How can one simplify $$\arctan\left(\frac{1}{\tan \alpha}\right)?$$ $0&lt;α&lt;\dfrac{\pi}{2}.$ Here is what I tried so far: $$\arctan\left(\dfrac{1}{\tan \alpha}\right)=θ$$ for some $θ$, so $$\frac{1}{\tan \alpha}=\tan(θ).$$</p> <p>I didn't know what to do next because I couldn't see a significant relationship between $θ$ and $α$. I am stuck right here; if there is some relation between $θ$ and $α$, that would make it a lot simpler.</p>
Christian Blatter
1,303
<p>When $\alpha=n\pi$ then $\arctan{1\over\tan\alpha}$ is undefined. Therefore we may assume that $$-{\pi\over2}&lt;\beta:=\bigl(n+{1\over2}\bigr)\pi-\alpha&lt;{\pi\over2}$$ for a certain $n\in{\mathbb Z}$. I claim that $$\arctan{1\over\tan\alpha}=\beta\ ,$$ whereby $\alpha=\bigl(n+{1\over2}\bigr)\pi$, i.e., $\tan\alpha=\pm\infty$, corresponds to $\beta=0$ in a natural way.</p> <p><em>Proof.</em> One has $-{\pi\over2}&lt;\beta&lt;{\pi\over2}$ by definition. When $0&lt;\beta&lt;{\pi\over2}$ then ${\pi\over2}-\beta=\alpha- n\pi$, so that we obtain $$\tan\beta={1\over\tan\bigl({\pi\over2}-\beta\bigr)}={1\over\tan\alpha}\ ,$$ as required. Similarly one argues for $-{\pi\over2}&lt;\beta&lt;0$.</p>
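A numeric spot-check of my own (not the answerer's): with $n=0$, i.e. for $\alpha\in(0,\pi)$, the claim reads $\arctan\frac{1}{\tan\alpha}=\frac\pi2-\alpha$.

```python
import math

# My verification: arctan(1/tan(alpha)) = pi/2 - alpha whenever
# pi/2 - alpha lies in (-pi/2, pi/2), i.e. the n = 0 case of the claim.
for alpha in [0.1, 0.5, 1.0, 1.4, 2.0, 3.0]:
    lhs = math.atan(1 / math.tan(alpha))
    assert abs(lhs - (math.pi / 2 - alpha)) < 1e-10
```

In particular, for the question's range $0<\alpha<\frac\pi2$ the answer is simply $\frac\pi2-\alpha$.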
1,378,633
<p>It seems that some, especially in electrical engineering and musical signal processing, describe that every signal can be represented as a Fourier series.</p> <p>So this got me thinking about the mathematical proof for such argument.</p> <p>But even after going through some resources about the Fourier series (which I don't have too much background in, but grasp the concept), I cannot find a mathematical proof for whether every function can be represented by a Fourier series. There was a hint about the function having to be periodic.</p> <p>So that means that the "every function can be represented as a Fourier series" is a myth and it doesn't apply on signals either, unless they're periodic?</p> <p>But then I can also find references like these: <a href="http://msp.ucsd.edu/techniques/v0.11/book-html/node171.html" rel="noreferrer">http://msp.ucsd.edu/techniques/v0.11/book-html/node171.html</a> that say/imply that every signal can be made periodic? So does that change the notion about whether Fourier series can represent every function, with the new condition of first making it periodic, if necessary?</p>
Rousan
696,544
<p>There are four standard conditions (the Dirichlet conditions) for the Fourier series of a function to exist:</p> <ol> <li>It has to be periodic.</li> <li>It must be single-valued and continuous; it can have a finite number of finite discontinuities.</li> <li>It must have only a finite number of maxima and minima within the period.</li> <li>The integral over one period of $|f(x)|$ must converge.</li> </ol> <p>Each of these has an analytical proof, but let's discuss them using analogy. 1) Note that a Fourier series can be written with complex exponentials, and these represent circles (like this one: <a href="https://www.youtube.com/watch?v=55y13PF0uSg&amp;feature=share" rel="nofollow noreferrer">https://www.youtube.com/watch?v=55y13PF0uSg&amp;feature=share</a>). Since circles are bounded curves, the vectors attached to them, and hence the output point, must come back along the same path after a long enough time, so the sum must be periodic. (It also works for non-periodic functions, but then you only get the result on a particular region.) 2) The continuous part is clear, I hope. For the discontinuous part, remember that the function can never go to infinity: if it did, the curve would run off forever at that point, whereas a sum of bounded circular motions stays within a finite bound (on a region). In a similar way we can get the intuition for the other two points.</p>
1,923,034
<p>A bagel store sells six different kinds of bagels. Suppose you choose 15 bagels at random. What is the probability that your choice contains at least one bagel of each kind? If one of the bagels is Sesame, what is the probability that your choice contains at least three Sesame bagels?</p> <p>My approach to the first problem was the equation $x_1+x_2+x_3+x_4+x_5+x_6=15, x_i \geq 1$ which is the same as $y_1+y_2+y_3+y_4+y_5+y_6=9, y_i \geq 0$ which has $ 14 \choose 9$ solutions. So that is 2002 solutions. And there are in total $22 \choose 15$ solutions to the equation without the restriction. So in a percentage, there is a $\frac{2002}{15504}$ or 12.9% we will get one of each kind.</p> <p>For the second problem, I used the equation $x_1+x_2+x_3+x_4+x_5+x_6=15, x_1 \geq 3, x_i \geq 0, i \neq 1$. This gives $17 \choose 12$ solutions, which gives a $\frac{6188}{15504}$ or a 39% chance of getting a sesame bagel. </p> <p>Is my approach for both of these right? (The percentages of these happening seem really high)</p>
G Ghost
641,931
<p>Your answers are both correct, you just have one tiny little error which I'm guessing is a typo. You said the total number of solutions to the equation <span class="math-container">$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 15, x_i \geq 0$</span> is <span class="math-container">${22 \choose 15}$</span>. But if we solve for the number of integral solutions of the above equation, we get <span class="math-container">${15 + 6 - 1 \choose 15} = {20 \choose 15} = 15504$</span>; the denominator <span class="math-container">$15504$</span> you actually used is <span class="math-container">${20 \choose 15}$</span>, not <span class="math-container">${22 \choose 15}$</span>.</p> <p>Other than that, your answers are correct. </p>
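For completeness, here is a quick check of my own of all the counts involved:

```python
from math import comb

# My verification of the stars-and-bars counts used above.
assert comb(14, 9) == 2002       # solutions with each x_i >= 1
assert comb(20, 15) == 15504     # total: C(15 + 6 - 1, 15)
assert comb(17, 12) == 6188      # solutions with x_1 >= 3

assert abs(2002 / 15504 - 0.129) < 1e-3   # about 12.9%
assert abs(6188 / 15504 - 0.399) < 1e-3   # about 39.9%
```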
264,572
<p>I am using <code>Table</code> to plot the time steps of a function. However, since there are a lot of decimal places in the x-axis, they are all cramped up : <a href="https://i.stack.imgur.com/SCSbK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SCSbK.png" alt="enter image description here" /></a></p> <p>I would like to multiply the numbers in the x-axis by <span class="math-container">$10^4$</span> so that the decimals drop out. How do I do this?</p>
Shin Kim
85,037
<p>Those are called <a href="https://reference.wolfram.com/language/ref/Ticks.html" rel="nofollow noreferrer"><code>Ticks</code></a>. You could try adding the following option in your <code>Plot</code>:</p> <pre><code>Ticks -&gt; {Table[{i/10^4, i}, {i, 0, 15}], Automatic} </code></pre>
1,021,753
<p>Any idea on how to compute the expected value of product of Ito's Integral with two different upper limit?</p> <p>For example: $$\mathbb{E}\left[\int_0^r f(t)\,dB(t) \int_0^s f(t)\,dB(t)\right]$$</p> <p>I only know how to compute when the upper limit r and s are the same...but don't know how when r and s are different...help. </p>
pbierre
259,171
<p>If you're familiar with the [Euler axis, Euler angle] representation of a generalized 3D rotation, it is fairly easy to prove your conjecture.</p> <p>Axis 1 rotation: project the Euler axis down onto the x-y plane. Begin by rotating around this axis by the Euler angle.</p> <p>Axis 2 rotation: calculate the elevation angle of the Euler axis. Construct an axis perpendicular to Axis 1 lying in the x-y plane. Rotate about Axis 2 by the elevation angle of the Euler axis.</p> <p>Exception: if the Euler axis is the +z-axis or -z-axis, discard the choice of the x-y plane for your rotation axes; choose the x-z plane instead, and carry out the Euler angle rotation about the z-axis.</p> <hr> <p>A note of caution in wording your conjecture: there is no single plane whose 2-axis, 2-angle set covers ALL possible 3D rotations. As noted in my exception, there is one Euler axis (the one orthogonal to the plane of choice) that cannot be emulated by 2 rotation axes in that same plane. So you want to word it so that you're allowed more than one plane of choice to span all possible 3D rotations.</p>
524,870
<p>This is adapted from 1.7.7 in Friedman's "Foundations of Modern Analysis":</p> <blockquote> <p>Let $\mathscr{B}$ be the $\sigma$-ring generated by the class of open subsets of $X$ [a fixed set], and $\mathscr{D}$ the $\sigma$-ring generated by the class of closed subsets of $X$. Show that $\mathscr{D} = \mathscr{B}$.</p> </blockquote> <p>I would appreciate a hint on how to begin doing this exercise, because I haven't a clue. I'm not really sure what the problem entails. I know what a $\sigma$-ring is, as well as open and closed sets. That $\mathscr{B},\mathscr{D}$ are generated means that they each are, in a sense, the smallest and unique ring containing its respective "underlying" class of sets. But none of this gives me any idea on where to start. The problem is from a section on metric spaces (prior to metric outer measures), but again not even the context gives me any ideas.</p> <p>(Couldn't come up with a good title, change it if you like...)</p>
Dylan Zhu
63,526
<p>You can try to show that every set $b \in \mathcal B$ also belongs to $\mathcal D$, and conversely. For this it is enough to check the generators: in a metric space every open set is a countable union of closed sets (an $F_\sigma$), so the open sets lie in $\mathcal D$, and every closed set is a countable intersection of open sets (a $G_\delta$), so the closed sets lie in $\mathcal B$ (note that a $\sigma$-ring is closed under countable intersections, since $\bigcap_n A_n = A_1 \setminus \bigcup_n (A_1 \setminus A_n)$).</p>
200,931
<p>I want to generate a layered drawing of the <a href="https://en.wikipedia.org/wiki/Hoffman%E2%80%93Singleton_graph" rel="noreferrer">Hoffman–Singlelton graph</a>. As an example of what I want, here is a layered drawing of the Petersen graph:</p> <p><a href="https://i.stack.imgur.com/doEt5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/doEt5.png" alt="enter image description here"></a></p> <p>Now if I right click on the output of <code>PetersenGraph[]</code> and do Graph Layout -> Layered, drawing, I get this:</p> <p><a href="https://i.stack.imgur.com/B4xUM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B4xUM.png" alt="enter image description here"></a></p> <p>Clearly a lot of the important visual information at the end layer is lost because the edges all overlap. Is there a way to recreate something similar to the the top image, where the edges at the last layer are visible?</p> <p>My actual goal is not to do this with the Petersen, but with the Hoffman–Singleton (in Mathematica, <code>FromEntity[Entity["Graph", "HoffmanSingletonGraph"]]</code>). Needless to say, I got a similar output for this graph:</p> <p><a href="https://i.stack.imgur.com/imLVp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/imLVp.png" alt="enter image description here"></a></p> <p>I appreciate any assistance with this.</p>
halmir
590
<p>Here's the one way by using the custom edge function:</p> <pre><code>arcRight[{a:{x1_,y1_},___,b:{x2_,y2_}}]/;y1&gt;y2:=arcRight[{b, a}]; arcRight[{a:{x1_,y1_},___,b:{x2_,y2_}}]/;y1&lt;=y2:=BSplineCurve[{a, {x1 + (y2-y1).7, (y1+y2)/2},b}] iLayeredDrawing[g_, spos_Integer:1, opt___?OptionQ] := Module[{s, vlist, leaves}, vlist = VertexList[g]; s = vlist[[spos]]; leaves = MaximalBy[Reap[BreadthFirstScan[g, s, "DiscoverVertex"-&gt;(Sow[{#1,#3}]&amp;)]][[2,1]], Last][[All,1]]; Graph[vlist, EdgeList[g], opt, GraphLayout-&gt;{"LayeredEmbedding", "Orientation"-&gt;Left, "LeafDistance"-&gt;1/(Length[leaves]/2), "RootVertex" -&gt; s}, EdgeShapeFunction-&gt;{a_\[UndirectedEdge]b_/;SubsetQ[leaves,{a,b}]:&gt;(arcRight[#1]&amp;)}] ] </code></pre> <p>For example,</p> <pre><code>iLayeredDrawing[PetersenGraph[], EdgeStyle -&gt; Black, VertexStyle -&gt; Directive[White, EdgeForm[Black]], VertexSize -&gt; .3] </code></pre> <p><a href="https://i.stack.imgur.com/poc9o.png" rel="noreferrer"><img src="https://i.stack.imgur.com/poc9o.png" alt="enter image description here"></a></p> <pre><code>iLayeredDrawing[FromEntity[Entity["Graph", "HoffmanSingletonGraph"]], EdgeStyle -&gt; Black, VertexStyle -&gt; Directive[White, EdgeForm[Black]], VertexSize -&gt; .6, ImageSize -&gt; 600] </code></pre> <p><a href="https://i.stack.imgur.com/jsq8s.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jsq8s.png" alt="enter image description here"></a></p>
501,660
<p>In school, we just started learning about trigonometry, and I was wondering: is there a way to find the sine, cosine, tangent, cosecant, secant, and cotangent of a single angle without using a calculator?</p> <p>Sometimes I don't feel right when I can't do things out myself and let a machine do it when I can't.</p> <p>Or, if you could redirect me to a place that explains how to do it, please do so.</p> <p>My dad said there isn't, but I just had to make sure.</p> <p>Thanks.</p>
Mark Bennet
2,906
<p>The <a href="http://en.wikipedia.org/wiki/Trigonometric_functions" rel="nofollow">wikipedia article</a> gives some infinite series, which are probably what your calculator uses. The formulae for sine and cosine are the ones to focus on first. They converge very quickly, but you have to realise that the angles are measured in radians, where $2\pi$ radians $=360^{\circ}$. If you do the conversion, you'll be able to calculate quite quickly for yourself.</p> <p>There are connections to a lot of beautiful and clever maths to be discovered, which explain why all this works. You have asked a great question. Keep going with the answer - there are more dimensions to it than you will see on the surface.</p>
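To make the series concrete, here is a short sketch of my own of the hand-computable approach for sine (the cosine series works the same way):

```python
import math

# My sketch of the Taylor-series approach: sin(x) = x - x^3/3! + x^5/5! - ...
# The angle must be in radians.
def sin_taylor(x, terms=10):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

deg30 = math.radians(30)                  # convert 30 degrees to radians
assert abs(sin_taylor(deg30) - 0.5) < 1e-10
assert abs(sin_taylor(1.0) - math.sin(1.0)) < 1e-10
```

Only a handful of terms already give many correct digits, which is why this is feasible by hand for small angles.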
351,642
<p>So I'm proving that a group $G$ with order $112=2^4 \cdot 7$ is not simple. And I'm trying to do this in extreme detail :) </p> <p>So, assume simple and reach contradiction. I've reached the point where I can conclude that $n_7=8$ and $n_2=7$. </p> <p>I let $P, Q\in \mathrm{Syl}_2(G)$ and now dealing with cases that $|P\cap Q|=1, 2^2, 2^3$ or $2^4$. </p> <p>I easily find contradiction when $|P\cap Q|=2^4$ and $2$. </p> <p>Um, got stuck REAL bad on the case $|P\cap Q|=2^3$ and $2^2$. </p> <p>If $|P \cap Q |=2^3= 8$ and $|P|=|Q|=16$, is there any relationship between $P,Q$ and their intersection that can help me? </p>
Dean Gurvitz
283,215
<p>I wanted to write out Mikko Korhonen's first idea for a proof in detail as a separate answer, since it is not trivial at all, and provoked some questions in the comments.</p> <p>As mentioned in the original question, we assumed $n_2=7$. From Sylow's second theorem, we know that all the 2-Sylow subgroups are conjugate, and we can look at $G$'s action on them by conjugation. This action induces a homomorphism: $$f:G\rightarrow S_7$$ $\ker(f)$ cannot be $G$ since the action is transitive. Then, if $\ker(f)$ is non-trivial (meaning $\ker(f)\neq\{e\}$), it is a non-trivial normal subgroup of $G$, and therefore $G$ isn't simple.</p> <p>Otherwise, we get $\ker(f)=\{e\}$ and therefore $f$ is injective, meaning that $G$ is isomorphic to a subgroup of $S_7$. For convenience, we'll write $G\leq S_7$. $G$ cannot be contained in $A_7$ because 112 doesn't divide $|A_7|$. In that case $GA_7=S_7$, and using the second isomorphism theorem we get: $$G/(G\cap A_7)\cong GA_7/A_7\cong S_7/A_7\cong\mathbb{Z}_2$$ and therefore $[G:(G\cap A_7)]=2$, meaning that $G\cap A_7$ is a normal subgroup of $G$.</p>
4,579,084
<p>It was a new contributor's question. I answered, got my -1 again and then deleted. Then I asked myself. Then gave it up again. Actually I was gonna ask a different question NOW. When I pressed ask a question, to my surprise, the question I intended to ask yesterday was in the memory!</p> <p>I wanted to evaluate the following limit by logarithmic limit rule: <span class="math-container">$$\lim_{n\rightarrow\infty} \left(\frac{n^{n-1}}{(n-1)!}\right)^{\frac{1}{n}}=\exp\left(\lim_{n\rightarrow\infty}\frac{(n-1)\ln n-\ln (n-1)!}{n}\right)=\exp\left(\lim_{n\rightarrow\infty}-\frac{1}{n}\sum_{k=1}^n\ln(\frac{k}{n})\right)$$</span> Then I observed a Riemann sum of an indefinite integral inside so that the limit is <span class="math-container">$$\exp\left(-\int_0^1\ln xdx\right)=\exp\left((x-x\ln x\vert_0^1)\right)=e.$$</span> Is my solution correct? Can you suggest another way? Stirling's approximation formula is excluded.</p>
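Your derivation checks out numerically; here is a quick script of my own (working with logarithms to avoid overflow) confirming the sequence tends to $e$:

```python
import math

# My numeric sanity check of the limit (n^(n-1)/(n-1)!)^(1/n) -> e.
# lgamma(n) = ln(Gamma(n)) = ln((n-1)!), so everything stays in log scale.
def term(n):
    return math.exp(((n - 1) * math.log(n) - math.lgamma(n)) / n)

assert abs(term(10**6) - math.e) < 1e-2
assert abs(term(10**7) - math.e) < abs(term(10**6) - math.e)  # still improving
```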
Brian M. Scott
12,042
<p>Let <span class="math-container">$\sigma\in\Bbb S_n$</span>, and suppose that <span class="math-container">$\sigma$</span> is <span class="math-container">$i$</span>-orderly and <span class="math-container">$j$</span>-orderly for some <span class="math-container">$i,j\in[m]$</span> such that <span class="math-container">$i\ne j$</span>. By hypothesis there are <span class="math-container">$x\in A_i\cap B_j$</span> and <span class="math-container">$y\in A_j\cap B_i$</span>. Then on the one hand <span class="math-container">$\sigma(x)&lt;\sigma(y)$</span>, since <span class="math-container">$\sigma$</span> is <span class="math-container">$i$</span>-orderly, but on the other hand <span class="math-container">$\sigma(y)&lt;\sigma(x)$</span>, since <span class="math-container">$\sigma$</span> is <span class="math-container">$j$</span>-orderly. This is clearly impossible, so <span class="math-container">$|\{i\in [m]:\sigma\text{ is }i\text{-orderly}\}|\le 1$</span>.</p> <p>Now let <span class="math-container">$i\in[m]$</span>; <span class="math-container">$\sigma\in\Bbb S_n$</span> is <span class="math-container">$i$</span>-orderly if and only if <span class="math-container">$\max\sigma[A_i]&lt;\min\sigma[B_i]$</span>. We can construct such a permutation of <span class="math-container">$[n]$</span> as follows. First, <span class="math-container">$\sigma[A_i]\cup\sigma[B_i]$</span> is a <span class="math-container">$(k+\ell)$</span>-element subset of <span class="math-container">$[n]$</span>, and there are <span class="math-container">$\binom{n}{k+\ell}$</span> of those. Once we’ve chosen one of those subsets to be <span class="math-container">$\sigma[A_i]\cup\sigma[B_i]$</span>, the smallest <span class="math-container">$k$</span> members must be <span class="math-container">$\sigma[A_i]$</span>, and the remaining <span class="math-container">$\ell$</span> members must be <span class="math-container">$\sigma[B_i]$</span>. 
The members of <span class="math-container">$\sigma[A_i]$</span> can be permuted in any of <span class="math-container">$k!$</span> ways, and those of <span class="math-container">$\sigma[B_i]$</span> can independently be permuted in any of <span class="math-container">$\ell!$</span> ways. Thus, there are <span class="math-container">$k!\ell!\binom{n}{k+\ell}$</span> ways to choose <span class="math-container">$\sigma\upharpoonright(A_i\cup B_i)$</span>.</p> <p>Finally, <span class="math-container">$\sigma$</span> must send <span class="math-container">$[n]\setminus(A_i\cup B_i)$</span> bijectively to <span class="math-container">$[n]\setminus\sigma[A_i\cup B_i]$</span>, and it can do so in <span class="math-container">$\big(n-(k+\ell)\big)!$</span> different ways, so there are altogether</p> <p><span class="math-container">$$k!\ell!(n-k-\ell)!\binom{n}{k+\ell}=\frac{n!k!\ell!}{(k+\ell)!}$$</span></p> <p>possibilities for <span class="math-container">$\sigma$</span>. That is,</p> <p><span class="math-container">$$|\{\sigma\in\Bbb S_n:\sigma\text{ is }i\text{-orderly}\}|=\frac{n!k!\ell!}{(k+\ell)!}$$</span></p> <p>for each <span class="math-container">$i\in[m]$</span>.</p>
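The count $\frac{n!\,k!\,\ell!}{(k+\ell)!}$ can be confirmed by brute force; this small check is my own verification for one case, taking "$i$-orderly" to mean $\max\sigma[A]<\min\sigma[B]$:

```python
from itertools import permutations
from math import factorial

# My brute-force check of the count n! k! l! / (k+l)! for a small case:
# A = {0, 1}, B = {2, 3} inside [n] = {0, ..., 4}.
n, A, B = 5, (0, 1), (2, 3)
count = sum(1 for s in permutations(range(n))
            if max(s[i] for i in A) < min(s[j] for j in B))
k, l = len(A), len(B)
assert count == factorial(n) * factorial(k) * factorial(l) // factorial(k + l)
```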
1,873,370
<p>I am trying to understand a particular coset/double coset of the finite group $G = GL(n, q^2) = GL_n(\mathbb{F}_{q^2})$. It has a natural subgroup $H = GL(n, q)$, which can also be viewed in the following way: consider an automorphism of raising each entry to the $q$-th power, (taking $n = 2$ as an example)</p> <p>$$ \varphi\begin{bmatrix}a &amp; b \\ c &amp; d\end{bmatrix} = \begin{bmatrix}a^q &amp; b^q \\ c^q &amp; d^q\end{bmatrix}, $$</p> <p>clearly $\varphi^2 = Id$, and $H = G^{\varphi}$ as the fixed points of this morphism.</p> <p>Question: what is a good way to view the coset $G/H$ and double coset $H \backslash G/H$, and any good way of writing the representatives?</p>
Qiaochu Yuan
232
<p>In general, let $K \to L$ be a field extension. $GL_n(L)/GL_n(K)$ can be interpreted as the set of "$K$-structures" on $L^n$. One of many equivalent ways to describe a $K$-structure is that it is a $K$-subspace $V$ of $L^n$ such that the induced map</p> <p>$$V \otimes_K L \to L^n$$</p> <p>is an isomorphism. Consequently, $GL_n(K) \backslash GL_n(L) / GL_n(K)$ can be described as the set of "relative positions" of two $K$-structures, in the sense of e.g. <a href="https://qchu.wordpress.com/2015/11/06/double-cosets-are-relative-positions/" rel="nofollow">this blog post</a>. </p> <p><strong>Edit, 7/28/16:</strong> In turn you can get some handle on relative positions by thinking about functions or properties of a pair $V, W$ of $K$-structures that are invariant under ($L$-linear) change of coordinates. A simple example is $\dim_K V \cap W$. More generally, for any $\ell \in L^{\times}$, you can look at $\dim_K \ell V \cap W$. </p>
1,869,564
<p>I tried to derive the logistic population model, and need to integrate $\int \frac{\frac{1}{k}}{1-\frac{N_t}{k}} dN_t$. Here is my solution:</p> <p>$\int \frac{\frac{1}{k}}{1-\frac{N_t}{k}} dN_t=\int \frac{1}{k-N_t} dN_t=-\int \frac{1}{k-N_t}d{(k-N_t)}=-\ln\mid k-N_t\mid+C_1$. I think I have done something wrong here, because if I solve it this way, $\int \frac{\frac{1}{k}}{1-\frac{N_t}{k}} dN_t=-\int \frac{1}{1-\frac{N_t}{k}} d(1-\frac{N_t}{k})=-\ln \mid 1-\frac{N_t}{k} \mid +C_2$, which is obviously different from the previous solution, so where is the mistake?</p>
smcc
354,034
<p>Any function of the form $f(x)=g(|x|)$ where $g$ is an increasing concave function with $g(0)=0$ and $\lim_{x\to\infty}g(x)=a$ will work. </p> <p>Letting $g(x)=a[1-h(x)]$, we need $h$ convex with $h(0)=1$ and $\lim_{x\to\infty}h(x)=0$. </p> <p>For example, you could take $h(x)=\frac{1}{1+x}$, so that $$f(x)=a-\frac{a}{1+|x|}.$$</p>
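A quick numeric sanity check of the concrete example $f(x)=a-\frac{a}{1+|x|}$ (a sketch of mine, with an arbitrary choice of $a=3$): it confirms $f(0)=0$, the limit $a$ at infinity, and that $f$ is increasing and concave for $x\ge 0$.

```python
a = 3.0  # an arbitrary positive limit value, chosen for illustration

def f(x):
    # the concrete example from the answer: f(x) = a - a / (1 + |x|)
    return a - a / (1 + abs(x))

# f(0) = 0 and f approaches a as |x| grows
assert f(0.0) == 0.0
assert abs(f(1e9) - a) < 1e-8

# for x >= 0, f is increasing and concave: increments are positive and shrinking
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
increments = [f(xs[i + 1]) - f(xs[i]) for i in range(len(xs) - 1)]
assert all(d > 0 for d in increments)  # increasing
assert all(increments[i] > increments[i + 1]
           for i in range(len(increments) - 1))  # concave
print("checks passed")
```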
2,312,968
<p>If $t=\ln(x)$ and $y$ is some function of $x$, with $\dfrac{dy}{dx}=e^{-t}\dfrac{dy}{dt}$, why would the second derivative of $y$ with respect to $x$ be $$-e^{-t}\frac{dt}{dx}\frac {dy}{dt} + e^{-t}\frac{d^2y}{dt^2}\frac{dt}{dx}?$$</p> <p>I know this links into the chain rule. I don't have a good intuition for why the first term has $\dfrac{dt}{dx}\dfrac{dy}{dt}$ (although I strongly suspect it is so that we can change the variable, since this question arose in the context of a second-order differential equation where $y$ was differentiated with respect to $x$, but the equation was non-linear, so we had to make it linear by substitution). Moreover, the main problem that I would like addressed is why the second term is differentiated in the way that it is. Basically, my question is: why does differentiating $\dfrac{dy}{dt}$ with respect to $x$ give $\dfrac{d^2y}{dt^2}\dfrac{dt}{dx}$?</p> <p>An explanation in English alongside any mathematical derivations would be preferable, but any of your time to help out is always much appreciated.</p>
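The quoted formula can be sanity-checked on a concrete function. Since $\frac{dt}{dx}=\frac{1}{x}=e^{-t}$, the expression reduces to $e^{-2t}\left(\frac{d^2y}{dt^2}-\frac{dy}{dt}\right)$; the sketch below (sympy, with my own arbitrary choice $y=t^3$) compares that against the second derivative computed directly in $x$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.log(x)          # the substitution t = ln(x)

# a concrete test function of t (my arbitrary choice): y = t^3
y_of_t = lambda s: s**3
y = y_of_t(t)          # y expressed as a function of x

# direct second derivative with respect to x
direct = sp.diff(y, x, 2)

# the quoted formula with dt/dx = 1/x = e^{-t} substituted in:
# e^{-2t} (d^2y/dt^2 - dy/dt), evaluated at t = ln(x)
s = sp.symbols('s')
dy_dt = sp.diff(y_of_t(s), s).subs(s, t)
d2y_dt2 = sp.diff(y_of_t(s), s, 2).subs(s, t)
formula = sp.exp(-2 * t) * (d2y_dt2 - dy_dt)

assert sp.simplify(direct - formula) == 0
print("formula verified for y = t^3")
```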
Saketh Malyala
250,220
<p>$\sin(2π-k)+c=0$</p> <p>This is equivalent to $\sin(k)=c$</p> <p>$\sin(\frac{4π}{3}-k)+c=0$</p> <p>$-\frac{\sqrt{3}}{2}\cos(k)+\frac{1}{2}\sin(k)+\sin(k)=0$</p> <p>$-\sqrt{3}\cos(k)=-3\sin(k)$</p> <p>$\tan(k)=\frac{\sqrt{3}}{3}$</p> <p>$k=\frac{π}{6}$</p> <p>$c=\sin(\frac{π}{6})=\frac{1}{2}$</p>
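The solution can be confirmed numerically; this sketch (mine) plugs $k=\pi/6$, $c=\tfrac12$ back into both original equations:

```python
import math

k = math.pi / 6
c = 0.5

# both original equations are satisfied (up to floating-point error)
assert abs(math.sin(2 * math.pi - k) + c) < 1e-12
assert abs(math.sin(4 * math.pi / 3 - k) + c) < 1e-12

# and tan(k) = sqrt(3)/3, as derived
assert abs(math.tan(k) - math.sqrt(3) / 3) < 1e-12
print("k = pi/6, c = 1/2 satisfy both equations")
```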
2,296,724
<p>I need to calculate $(A+B)^{-1}$, where $A$ and $B$ are two square, very sparse and very large matrices. $A$ is block diagonal, real symmetric and positive definite, and I have access to $A^{-1}$ (which in this case is also sparse and block diagonal). $B$ is diagonal and real positive. In my application, I need to calculate the inverse of the sum of these two matrices, where the inverse of the non-diagonal one (i.e., $A^{-1}$) is updated frequently and readily available.</p> <p>Since $B$ is full rank, the Woodbury lemma is of no use here (well, it is, but it's too slow). Other methods described in <a href="https://math.stackexchange.com/questions/17776/inverse-of-the-sum-of-matrices">this nice question</a> are of no use in my case, as the spectral radius of $A^{-1}B$ is much larger than one. Methods based on diagonalisation assume that it is the diagonal matrix that is being updated frequently, which is not my case (i.e., diagonalising $A$ is expensive, and I'd have to do that very often).</p> <p>I'm quite happy to live with an approximate solution.</p>
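One standard approach in this setting (my suggestion, not from the question) is to avoid forming $(A+B)^{-1}$ at all and instead solve $(A+B)x=b$ iteratively, using the readily available $A^{-1}$ as a preconditioner. A minimal preconditioned conjugate-gradient sketch in plain numpy (small dense matrices stand in for the real sparse ones):

```python
import numpy as np

def pcg(apply_M, apply_Minv, b, tol=1e-10, maxiter=200):
    """Preconditioned CG for M x = b, with apply_Minv approximating M^{-1}."""
    x = np.zeros_like(b)
    r = b - apply_M(x)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Mp = apply_M(p)
        alpha = rz / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
n = 40
# A: symmetric positive definite (stand-in for the block-diagonal A)
C = rng.standard_normal((n, n))
A = C @ C.T + n * np.eye(n)
B_diag = rng.uniform(1.0, 2.0, n)   # B: positive diagonal
A_inv = np.linalg.inv(A)            # assumed readily available, per the question

b = rng.standard_normal(n)
x = pcg(lambda v: A @ v + B_diag * v,   # apply A + B
        lambda v: A_inv @ v,            # preconditioner: the known A^{-1}
        b)
assert np.linalg.norm(A @ x + B_diag * x - b) < 1e-8
print("residual:", np.linalg.norm(A @ x + B_diag * x - b))
```

Whenever $A^{-1}$ is updated, only the preconditioner changes; no factorization of $A+B$ is ever needed, and an approximate solve is obtained by loosening the tolerance.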
theo
718,585
<p>Unless I am seriously mistaken, the above answer is incorrect. In my opinion, the problem lies with <span class="math-container">$$ A_i+diag(a_{i,1},\cdots,a_{i,n_i})=P_i\begin{bmatrix} \lambda_{i,1}+a_{i,1} &amp; 0 &amp; 0 &amp; \cdots &amp; 0 &amp; 0 \\ 0 &amp; \ddots &amp; \ddots &amp; \ddots &amp; &amp; 0 \\ 0 &amp; \ddots &amp; \ddots &amp; \ddots &amp; \ddots &amp; \vdots \\ \vdots &amp; \ddots &amp; \ddots &amp; \ddots &amp; \ddots &amp; 0 \\ 0 &amp; &amp; \ddots &amp; \ddots &amp; \ddots &amp; 0 \\ 0 &amp; 0 &amp; \cdots &amp; 0 &amp; 0 &amp; \lambda_{i,n_{i}}+a_{i,n_i} \end{bmatrix}{P_i}^{T} $$</span></p> <p>Let's consider <span class="math-container">$$ P=\frac{1}{\sqrt{5}}\left[\begin{matrix} 2 &amp; -1 \\ 1 &amp; 2\end{matrix}\right] $$</span> <span class="math-container">$$ D=\left[\begin{matrix} 2 &amp; 0 \\ 0 &amp; 1\end{matrix}\right] $$</span> <span class="math-container">$$ B= \left[\begin{matrix} 2 &amp; 0 \\ 0 &amp; 1\end{matrix}\right] $$</span></p> <p>Quick calculations with <span class="math-container">$A = PDP^T$</span> show <span class="math-container">$$ A + B = PDP^T + B = \left[\begin{matrix} 3.8 &amp; 0.4 \\ 0.4 &amp; 2.2\end{matrix}\right], $$</span> but <span class="math-container">$$ P (D + B) P^T = \left[\begin{matrix} 3.6 &amp; 0.8 \\ 0.8 &amp; 2.4\end{matrix}\right]. $$</span></p> <p>The problem lies in the fact that adding a diagonal matrix to the un-transformed matrix $A$ is not the same as adding it to the diagonal factor $D$ of its diagonalization, i.e. <span class="math-container">$$P D P^T + B \neq P (D + B) P^T$$</span> in general.</p> <p>Or am I missing something?</p> <p>PS: Unfortunately, I do not have enough reputation to comment. Sorry for the second answer; please feel free to add this to the original.</p>
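The counterexample is easy to verify numerically; a quick check (my sketch) with numpy:

```python
import numpy as np

P = np.array([[2.0, -1.0], [1.0, 2.0]]) / np.sqrt(5)   # orthogonal
D = np.diag([2.0, 1.0])
B = np.diag([2.0, 1.0])

A = P @ D @ P.T

lhs = A + B              # what we actually want
rhs = P @ (D + B) @ P.T  # adding B to the diagonal factor instead

assert np.allclose(lhs, [[3.8, 0.4], [0.4, 2.2]])
assert np.allclose(rhs, [[3.6, 0.8], [0.8, 2.4]])
assert not np.allclose(lhs, rhs)  # P D P^T + B != P (D + B) P^T in general
print("lhs:\n", lhs, "\nrhs:\n", rhs)
```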
1,187,376
<p>Let $c(n,k)$ be the unsigned Stirling numbers of the first kind, i.e., the number of $n$-permutations with exactly $k$ cycles. Apparently, $$\sum_{k=1}^n c(n,k)2^k = (n+1)!$$</p> <p>I want to prove the equality. </p> <p>I am most interested in a combinatorial explanation. </p> <p>The exponential generating function for the RHS is $\frac1{(1-x)^2}$. Is there a way to derive the e.g.f. for the LHS symbolically? </p>
Qiaochu Yuan
232
<p>The more general result is that</p> <p>$$\sum_{k=1}^n c(n, k) x^k = x(x + 1) \dots (x + n - 1).$$</p> <p>Your result follows straightforwardly by substituting $x = 2$. This identity has the following cute proof: dividing both sides by $n!$ we get</p> <p>$${x + n - 1 \choose n} = \frac{1}{n!} \sum_{k=1}^n c(n, k) x^k.$$</p> <p>The LHS is the number of multisets of size $n$ on a set of size $x$. This is the number of orbits of the action of the symmetric group $S_n$ on the set of functions from a set $[n]$ of size $n$ to a set $[x]$ of size $x$, and accordingly the number of orbits can be counted using <a href="http://en.wikipedia.org/wiki/Burnside%27s_lemma">Burnside's lemma</a>. If $\sigma$ is a permutation with $k$ cycles, then the number of fixed points of $\sigma$ acting on functions $[n] \to [x]$ is $x^k$, and the conclusion follows.</p> <p>You can also prove this identity by induction on $n$. </p> <p>For more general results along these lines see <a href="https://qchu.wordpress.com/2009/06/21/gila-v-the-polya-enumeration-theorem-and-applications/">this blog post</a> and <a href="https://qchu.wordpress.com/2009/06/24/gila-vi-the-cycle-index-polynomials-of-the-symmetric-groups/">this one</a>. </p>
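The general identity is easy to check by brute force for small $n$; the sketch below (mine) counts cycles over all permutations and compares against the rising factorial:

```python
from itertools import permutations
from math import factorial

def cycle_count(perm):
    """Number of cycles of a permutation given in one-line notation on {0..n-1}."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def stirling_sum(n, x):
    # sum over all n-permutations of x^{number of cycles} = sum_k c(n,k) x^k
    return sum(x ** cycle_count(p) for p in permutations(range(n)))

def rising_factorial(x, n):
    out = 1
    for i in range(n):
        out *= x + i
    return out

for n in range(1, 7):
    # general identity: sum_k c(n,k) x^k = x (x+1) ... (x+n-1)
    for x in (1, 2, 3):
        assert stirling_sum(n, x) == rising_factorial(x, n)
    # the special case x = 2 gives (n+1)!
    assert stirling_sum(n, 2) == factorial(n + 1)
print("identity verified for n = 1..6")
```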
1,187,376
<p>Let $c(n,k)$ be the unsigned Stirling numbers of the first kind, i.e., the number of $n$-permutations with exactly $k$ cycles. Apparently, $$\sum_{k=1}^n c(n,k)2^k = (n+1)!$$</p> <p>I want to prove the equality. </p> <p>I am most interested in a combinatorial explanation. </p> <p>The exponential generating function for the RHS is $\frac1{(1-x)^2}$. Is there a way to derive the e.g.f. for the LHS symbolically? </p>
Brian M. Scott
12,042
<p>Here is a tedious but extremely elementary combinatorial argument for the more general result.</p> <p>Let $\pi$ be a permutation of $[n]$ having $k$ cycles. The standard representation of $\pi$ is </p> <p>$$(a_{11}a_{12}\ldots a_{1m_1})(a_{21}a_{22}\ldots a_{2m_2})\ldots(a_{k1}a_{k2}\ldots a_{km_k})\;,\tag{1}$$ </p> <p>where $a_{i1}&gt;a_{ij}$ for $i=1,\ldots,k$ and $j=2,\ldots,m_i$, and $a_{11}&lt;a_{21}&lt;\ldots&lt;a_{k1}$. In other words, each cycle is listed with its largest element first, and the cycles are listed in increasing order of their largest elements. One-element cycles are not suppressed. The map that sends $\pi$ to the permutation whose one-line representation is $(1)$ without the parentheses is a bijection.</p> <p>Now we’ll count the permutations $\pi$ with $k$ cycles as in $(1)$. Clearly $a_{k1}=n$, but we have a free choice for $a_{11},a_{21},\ldots,a_{k-1,1}$. Once they’re chosen, in how many ways can we fill out $(1)$? We can ignore the parentheses, and of course we have to put $a_{11}$ in the first slot. We now place the $a_{11}-1$ elements of $[n]$ that are less than $a_{11}$; they can go anywhere in $(1)$ after $a_{11}$, so they can be placed in $(n-1)(n-2)\ldots(n-a_{11}+1)$ different ways. Once they’ve been placed, $a_{21}$ must go in the first free slot. That leaves $n-a_{11}-1$ open slots in $(1)$, and the $a_{21}-a_{11}-1$ elements of $[n]$ that are larger than $a_{11}$ and smaller than $a_{21}$ can go in any of them. Thus, they can be placed in </p> <p>$$(n-a_{11}-1)(n-a_{11}-2)\ldots(n-a_{21}+1)$$</p> <p>different ways, after which $a_{31}$ must be placed in the first available slot.</p> <p>In general, once we’ve made the forced placement of $a_{i1}$, the $a_{i1}-a_{i-1,1}-1$ elements of $[n]$ lying strictly between $a_{i-1,1}$ and $a_{i1}$ can be placed in</p> <p>$$(n-a_{i-1,1}-1)(n-a_{i-1,1}-2)\ldots(n-a_{i1}+1)$$</p> <p>different ways.
Thus, if we set $a_{01}=0$, the number of permutations with these values of $a_{11},\ldots,a_{k1}$ is</p> <p>$$\prod_{i=1}^{k}(n-a_{i-1,1}-1)(n-a_{i-1,1}-2)\ldots(n-a_{i1}+1)=\frac{(n-1)!}{\prod_{i=1}^{k-1}(n-a_{i1})}\;.\tag{2}$$ </p> <p>As we run over all possible choices for the leading elements $a_{11},\ldots,a_{k1}$, the denominator in $(2)$ runs over all $(k-1)$-fold products of distinct elements of $[n-1]$, so $(2)$ itself runs over all $(n-k)$-fold products of distinct elements of $[n-1]$, and $n\brack k$ is therefore the sum of all such products.</p> <p>However, that sum is clearly also the coefficient of $x^k$ in the product</p> <p>$$x^{\overline n}=\prod_{i=0}^{n-1}(x+i)=x(x+1)(x+2)\ldots(x+n-1)\;,$$</p> <p>so in general we have the identity</p> <p>$$x^{\overline n}=\sum_{k=0}^n{n\brack k}x^k\;,$$</p> <p>and the desired identity follows by setting $x=2$.</p>
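The intermediate claim, that $n\brack k$ is the sum of all $(n-k)$-fold products of distinct elements of $[n-1]$, can be checked directly for small $n$; this sketch (mine) computes $n\brack k$ from the standard recurrence and compares:

```python
from itertools import combinations
from math import prod

def stirling1_unsigned(n, k):
    # recurrence: c(n, k) = c(n-1, k-1) + (n-1) c(n-1, k), with c(0, 0) = 1
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

def products_sum(n, k):
    # sum of all (n-k)-fold products of distinct elements of {1, ..., n-1};
    # the empty product (k = n) counts as 1
    return sum(prod(c) for c in combinations(range(1, n), n - k))

for n in range(1, 8):
    for k in range(1, n + 1):
        assert stirling1_unsigned(n, k) == products_sum(n, k)
print("c(n,k) equals the sum of (n-k)-fold products for n = 1..7")
```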
33,622
<p>I am looking for differentiable functions $f$ from the unit interval to itself that satisfy the following equation $\forall\:p \in \left( 0,1 \right)$:</p> <p>$$1-p-f(f(p))-f(p)f'(f(p))=0$$</p> <p>Is there a way to use <em>Mathematica</em> to solve such equations?<br> <code>DSolve</code> is of course unable to handle this -- unless there are tricks I don't know about.</p>
akater
1,859
<p>At least you can eliminate $p$:</p> <pre><code>continue[{exprs__}, f_] :=
  Append[#, Switch[Head@f
      , Rule | RuleDelayed, # /. f &amp;
      , _, f]@Last@#]&amp; @ {exprs}

toRHS[term_][lhs_ == rhs_] := lhs - term == rhs - term

multiplyBothSidesBy[x_]@e_Equal := # x &amp; /@ e

privateDRule = f_'[x_] :&gt; d[f@x]/d[x];
integrateLocally = u_ d[v_] :&gt; d[u v] - v d[u];
integrateGlobally = u_ d[v_] + v_ d[u_] :&gt; d[u v];
outMinus = d[-x_] :&gt; -d[x];
</code></pre> <p>Check the result:</p> <pre><code>Column[#, Spacings -&gt; 1]&amp; @
 Fold[continue, {1 - p - f[f[p]] - f[p] f'[f[p]] == 0}, {
   f[p] -&gt; u,
   toRHS[1 - p],
   multiplyBothSidesBy[d@u],
   privateDRule,
   Expand,
   MapAt[# /. integrateLocally &amp;, #, 1] &amp;,
   outMinus,
   integrateGlobally,
   Composition[Simplify /@ # &amp;, multiplyBothSidesBy[-(d@u)^-1]],
   p -&gt; InverseFunction[f][u]}] // TraditionalForm
</code></pre> <p>$\frac{d(u f(u))}{du}=1-f^{-1}(u)$</p> <p>I give this “answer” primarily because I rarely see people using Mathematica this way, and your nasty nested (no pun intended) expression allows a clear demonstration of this use of Mma. The language is good for explaining the (possibly incomplete) idea and, at the same time, for immediately checking whether the idea works.</p>