qid | question | author | author_id | answer
---|---|---|---|---
813,395 |
<p>I have read that linear independence occurs when:</p>
<p>$$\sum_{i=1}^n a_i v_i =0$$</p>
<p>has only $a_i=0$ as a solution. But what if all $v_i$ were $0$? Then the $a_i$ could vary and still yield $0$. Does that mean that such a vector set is not linearly independent?</p>
<p>What if I have:</p>
<p>Let $\{c_0,c_1,c_2,\dots,c_n\}$ denote a set of $n+1$ distinct elements in $\mathbb{R}$. Define the set of $n+1$ polynomials.</p>
<p>$$f_j(x)=\prod_{k=0,k\ne j}^n \frac{x-c_k}{c_j - c_k} $$</p>
<p>Note that $f_j(x) \in P_n(\mathbb{R})$ with the property
$$f_j(c_l) = \left\{ \begin{align} 0&& \text{if}&& j\ne l\\ 1&& \text{if}&& j= l \end{align} \right.$$</p>
<p>And $\alpha = \{f_0(x),f_1(x),\dots,f_n(x)\}$, then this is or isn't linearly independent based on my $x$ value. Is there something here that forces $x$ to equal one of my $c_j$? For I am told that this $\alpha$ is linearly independent.</p>
|
5xum
| 112,884 |
<p>If there exists an $i$ for which $v_i=0$, then selecting $a_i=1$ and $a_j=0$ for $j\neq i$ means that $$\sum_{k=1}^na_kv_k=a_iv_i=0.$$</p>
<p>This means that if at least one vector in the set is $0$, the set is not linearly independent.</p>
|
4,016,133 |
<p>I have recently been exploring set-theoretic postulates that contradict GCH. One particularly interesting one is the proposition "<span class="math-container">$2^\kappa$</span> is singular for each infinite cardinal <span class="math-container">$\kappa$</span>". However, I could not prove that this proposition is at all consistent with ZFC.</p>
<p>It seems that a slightly weakened form "<span class="math-container">$2^\kappa$</span> is singular for each infinite <em>regular</em> cardinal <span class="math-container">$\kappa$</span>" is indeed consistent with ZFC, since the function <span class="math-container">$\kappa\mapsto\aleph_{\kappa^+}$</span> satisfies the conditions of Easton's theorem and there therefore exists a model where <span class="math-container">$2^\kappa=\aleph_{\kappa^+}$</span> for each regular <span class="math-container">$\kappa$</span>. Can this result also be extended to all singular cardinals?</p>
|
XiaohuWang
| 809,229 |
<p>As it turns out, my question has already been answered in <a href="https://mathoverflow.net/questions/226887/when-can-power-sets-be-limit-cardinals">this post</a>, where other related topics have also been discussed.</p>
|
152,467 |
<p>Can you please explain to me how to get from a nonparametric equation of a plane like this:</p>
<p>$$ x_1−2x_2+3x_3=6$$</p>
<p>to a parametric one. In this case the result is supposed to be </p>
<p>$$ x_1 = 6-6t-6s$$
$$ x_2 = -3t$$
$$ x_3 = 2s$$</p>
<p>Many thanks.</p>
|
Karolis Juodelė
| 30,701 |
<p>There is more than one way to write any plane in a parametric way. To write a plane in this way, pick any three points $A$, $B$, $C$ on that plane, not all on one line. Then
$$f(s, t) = A + (B-A)s + (C-A)t$$</p>
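<p>For the plane $x_1-2x_2+3x_3=6$ from the question, this recipe can be checked with a short NumPy sketch (the choice of the three points is mine; any three non-collinear points on the plane will do):</p>
<pre><code>import numpy as np

# three non-collinear points on the plane x1 - 2*x2 + 3*x3 = 6
A = np.array([6.0, 0.0, 0.0])
B = np.array([0.0, -3.0, 0.0])
C = np.array([0.0, 0.0, 2.0])

def f(s, t):
    # parametrization f(s, t) = A + (B - A)s + (C - A)t
    return A + (B - A) * s + (C - A) * t

n = np.array([1.0, -2.0, 3.0])            # normal vector of the plane
for s, t in np.random.rand(5, 2):
    assert np.isclose(n @ f(s, t), 6.0)   # every f(s, t) satisfies the plane equation
print(f(1.0, 0.0), f(0.0, 1.0))           # recovers B and C
</code></pre>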
|
1,012,158 |
<p>Let $y\in \mathbb{R}$.
Prove: <br>
if for every positive number $b$
$$ \left\lvert y \right\rvert \leq b, $$
then $y=0$.</p>
<p>I tried separating into cases where</p>
<p>$$ -b\leq y\leq 0 $$ and $$ 0\leq y\leq b $$</p>
<p>But I can't see how it helps me. Any ideas? Thanks.</p>
|
Community
| -1 |
<p>By the hypothesis we have</p>
<p>$$\forall b>0,\quad |y|\le b$$
which means that $|y|$ is a lower bound for the set $\Bbb R_{>0}$, so $|y|\le 0$, the greatest lower bound of this set. Hence we get $$0\le |y|\le 0\implies |y|=0\implies y=0$$</p>
|
1,012,158 |
<p>Let $y\in \mathbb{R}$.
Prove: <br>
if for every positive number $b$
$$ \left\lvert y \right\rvert \leq b, $$
then $y=0$.</p>
<p>I tried separating into cases where</p>
<p>$$ -b\leq y\leq 0 $$ and $$ 0\leq y\leq b $$</p>
<p>But I can't see how it helps me. Any ideas? Thanks.</p>
|
idm
| 167,226 |
<p>Suppose that $y\neq 0$. Then there exists $c>0$ such that
$$|y|\geq c>0,$$
which contradicts
$$\forall b> 0, |y|\leq b.$$</p>
<p>Q.E.D.</p>
|
25,414 |
<p>I'm running in to some problems with generating a persistent HSQLDB and during some troubleshooting I came upon the following behavior.</p>
<pre><code>Needs["DatabaseLink`"]
tc = OpenSQLConnection[
JDBC["hsqldb", ToFileName[Directory[], "temp"]], Username -> "sa"]
CloseSQLConnection[tc]
</code></pre>
<p>The above code generates three files (which will be located in <code>Directory[]</code>). Despite the fact that the connection is closed, two of the files (temp.lck and temp.log) cannot be deleted until the Mathematica kernel has been shut down. Is this 'normal' behavior? </p>
|
Fred Daniel Kline
| 973 |
<p><a href="http://www.fileinfo.com/extension/lck" rel="nofollow">The last paragraph here </a>indicates that the <strong>.lck</strong> extension is assumed by the OS to be managed by the application (which in this case is <em>Mathematica</em>). The <strong>.log</strong> is also in that category. The solution is to specifically delete those files after </p>
<blockquote>
<p>CloseSQLConnection[tc]</p>
</blockquote>
<p>I use the CUD pattern when programming: Create, Use, and Dispose. Mainly for memory allocations and temporary files. </p>
|
3,805,286 |
<p>This is a question on the convergence of a sequence of real, convex, analytic functions (it does not get better than that!):</p>
<p>Let <span class="math-container">$(f_n)_{n\in \mathbb N}$</span> be a sequence of convex analytic functions on <span class="math-container">$\mathbb R$</span>.</p>
<p>Suppose that <span class="math-container">$f_n(x) \to f(x)$</span> as <span class="math-container">$n \to \infty$</span> for all <span class="math-container">$x \in \mathbb R$</span> (or in <span class="math-container">$\mathbb R^+$</span>).</p>
<p>Is <span class="math-container">$f(x)$</span> analytic?</p>
|
zhw.
| 228,045 |
<p>Counterexample: Define <span class="math-container">$f_n(x) = (x^2+1/n)^{1/2}.$</span> Then each <span class="math-container">$f_n$</span> is analytic and convex on <span class="math-container">$\mathbb R.$</span> Clearly <span class="math-container">$f_n(x)\to |x|$</span> pointwise everywhere. (A little more work shows <span class="math-container">$f_n(x)\to |x|$</span> uniformly on <span class="math-container">$\mathbb R.$</span>)</p>
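<p>A quick numerical illustration of the uniform convergence (Python/NumPy; the finite grid is only for illustration): the supremum of <span class="math-container">$|f_n(x)-|x||$</span> is attained at <span class="math-container">$x=0$</span> and equals <span class="math-container">$1/\sqrt n$</span>.</p>
<pre><code>import numpy as np

xs = np.linspace(-5, 5, 2001)             # grid containing x = 0
for n in [1, 10, 100, 10000]:
    fn = np.sqrt(xs**2 + 1.0 / n)
    print(n, np.max(np.abs(fn - np.abs(xs))), 1.0 / np.sqrt(n))
</code></pre>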
|
2,043,457 |
<p>Does anybody know of a succinct way to compute the residue of $f(z)=z^m/(1-e^{-z})^{n+1}$ at $z=0$? I am only interested in the nontrivial case $m<n$.
Induction seems complicated/inefficient, so I am looking for a "trick", perhaps with Lagrange inversion?</p>
|
Martín-Blas Pérez Pinilla
| 98,199 |
<p>In a $\Bbb R$-vector space, the scalar product $\lambda v$ with $\lambda\in \Bbb C$ <strong>isn't defined</strong>. You can <em>define</em> a <a href="https://en.wikipedia.org/wiki/Complexification" rel="nofollow noreferrer">complexification</a> of a $\Bbb R$-vector space, but it will be <em>another</em> structure.</p>
|
4,154,025 |
<p>In a set of lecture notes, I have the following result:</p>
<blockquote>
<p><strong>Theorem</strong>. Let <span class="math-container">$X_n$</span> be random variables on <span class="math-container">$(\Omega, \mathcal{F}, \mathbb{P})$</span> with values in a Polish metric space <span class="math-container">$S$</span>. Suppose <span class="math-container">$X = (X_n)_{n \geq 1}$</span> is a stationary sequence. Then <span class="math-container">$X$</span> is ergodic if and only if for any bounded Borel measurable function <span class="math-container">$g: S^p \to \mathbb{R}$</span> with <span class="math-container">$p \geq 1$</span> an arbitrary integer,
<span class="math-container">$$\dfrac{1}{n}\sum_{m=0}^{n-1}g(X_{m+1}, \dots, X_{m+p}) \overset{a.s.}{\to} \mathbb{E}[g(X_1, \dots, X_p)]\text{.}$$</span></p>
</blockquote>
<p>Note that <span class="math-container">$\overset{a.s.}{\to}$</span> denotes almost sure convergence as <span class="math-container">$n \to \infty$</span>.</p>
<p>I have been trying to find this result in the 20-30 measure-theoretic probability books I have to no avail, as well as <em>An Introduction to Ergodic Theory</em> by Walters. Does anyone know of a textbook where I can find this result? I would strongly prefer a reference with a proof, but would be willing to take those without as well.</p>
<p><strong>Edit</strong>: Adding definitions as requested.</p>
<p>Given <span class="math-container">$X$</span> above, it is ergodic if for any invariant set <span class="math-container">$A \in \mathcal{F}$</span>, <span class="math-container">$\mathbb{P}(A) \in \{0, 1\}$</span>.</p>
<p>By "invariant set," we say a set <span class="math-container">$A \in \mathcal{F}$</span> is invariant with respect to <span class="math-container">$X$</span> if for some <span class="math-container">$B \in \mathcal{B}(\mathbb{R}^{\infty})$</span> (<span class="math-container">$\mathcal{B}(\mathbb{R}^{\infty})$</span> denoting the Borel <span class="math-container">$\sigma$</span>-algebra generated by <span class="math-container">$\mathbb{R}^{\infty}$</span>), <span class="math-container">$A = \{(X_n, X_{n+1}, X_{n+2}, \dots)\} \in B$</span> for all <span class="math-container">$n \geq 1$</span>.</p>
<p>[I suspect that <span class="math-container">$S^{\infty}$</span> should be used in place of <span class="math-container">$\mathbb{R}^{\infty}$</span> in the above definitions and that <span class="math-container">$\in$</span> should be <span class="math-container">$\subset$</span>, but that's how they are presented in the lecture notes.]</p>
<p><strong>Edit 2</strong>: I found this claim in some other sources, though not in great detail. It would be nice to find a textbook.</p>
<ul>
<li>Last sentence of <a href="http://www.columbia.edu/%7Eks20/6712-14/6712-14-Notes-Ergodic.pdf" rel="nofollow noreferrer">http://www.columbia.edu/~ks20/6712-14/6712-14-Notes-Ergodic.pdf</a></li>
<li><a href="https://onlinelibrary.wiley.com/doi/pdf/10.1002/9780470670057.app1" rel="nofollow noreferrer">Appendix A</a> of <em>GARCH Models: Structure, Statistical Inference and Financial Applications</em> uses the theorem above as the definition of an ergodic stationary process. This passage cites Billingsley (1995), which I assume is <em>Probability and Measure</em> - but I know that this theorem is not in there.</li>
</ul>
|
Kore-N
| 59,827 |
<p>Consider the case <span class="math-container">$p=1$</span> (allowing for <span class="math-container">$p>1$</span> seems unnecessary). By Birkhoff's ergodic theorem (<a href="http://math.uchicago.edu/%7Emay/REU2016/REUPapers/Ran.pdf" rel="nofollow noreferrer">http://math.uchicago.edu/~may/REU2016/REUPapers/Ran.pdf</a> , Theorem 7.1), for any measurable and bounded <span class="math-container">$g$</span>, the following convergence holds almost surely:
<span class="math-container">$$\frac{1}{n}\sum_{i=1}^n g(X_i) \to \mathbb{E}[g(X_1) |\mathcal{I}],$$</span>
where <span class="math-container">$\mathcal{I}$</span> is the <span class="math-container">$\sigma-$</span>algebra of invariant sets. From the definition of conditional expectation
<span class="math-container">$$\mathbb{E}[g(X_1) |\mathcal{I}] = \mathbb{E}[g(X_1)] \qquad \forall g \text{ measurable + bounded }$$</span>
is equivalent to <span class="math-container">$\mathcal{I}$</span> being the trivial <span class="math-container">$\sigma-$</span>algebra, which is in turn equivalent to ergodicity.</p>
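<p>Not a reference, but the statement is easy to see numerically for an i.i.d. (hence stationary and ergodic) sequence; a minimal NumPy simulation with <span class="math-container">$p=2$</span> and a bounded <span class="math-container">$g$</span> of my choosing:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n, p = 200_000, 2
X = rng.standard_normal(n + p)            # i.i.d. standard normals

def g(x, y):                              # bounded Borel function on S^2
    return float(x * y > 0)

time_avg = np.mean([g(X[m], X[m + 1]) for m in range(n)])
print(time_avg)                           # close to E[g(X_1, X_2)] = P(X_1 X_2 > 0) = 1/2
</code></pre>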
|
2,846,114 |
<p>$$\int\frac{2}{x(3x-8)}dx=P\cdot \ln\left|x\right|+Q\cdot \ln\left|3x-8\right|$$</p>
<p>Find out what P and Q are equal to.</p>
<p>This is what I worked out:</p>
<p>$$\frac{A}{x}+\frac{B}{3x-8}=\frac{2}{x(3x-8)}$$
$$-\frac{1}{4}=A,\ \ \ \frac{3}{4}=B$$
$$P=A, Q=B$$</p>
<p>why is the answer $P=-\frac{1}{4}, Q=\frac{1}{4}$?</p>
|
mengdie1982
| 560,634 |
<p>Since $$\frac{{\rm d}(P\cdot \ln\left|x\right|+Q\cdot \ln\left|3x-8\right|)}{{\rm d}x}=\frac{P}{x}+\frac{3Q}{3x-8}=\frac{3(P+Q)x-8P}{x(3x-8)},$$</p>
<p>we obtain $$P+Q=0,~~-8P=2.$$
As a result, $$P=-\frac{1}{4}, ~~~Q=\frac{1}{4}.$$</p>
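<p>The same point can be confirmed with SymPy: $Q$ picks up the factor $\frac13$ from the inner derivative of $3x-8$, so $Q=\frac{B}{3}=\frac14$ rather than $B=\frac34$. A minimal check by differentiation:</p>
<pre><code>from sympy import symbols, log, Rational, simplify, diff

x = symbols('x', positive=True)
P, Q = Rational(-1, 4), Rational(1, 4)
candidate = P * log(x) + Q * log(3*x - 8)
# derivative of the candidate antiderivative minus the integrand simplifies to 0
print(simplify(diff(candidate, x) - 2 / (x * (3*x - 8))))
</code></pre>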
|
167,812 |
<p>I call a profinite group $G$ <strong><em>Noetherian</em></strong> if every ascending chain of closed subgroups is eventually stable. A standard argument shows that every closed subgroup of a Noetherian profinite group is finitely generated.</p>
<p>A profinite group $G$ is called <strong><em>just-infinite</em></strong> if every nontrivial $M \lhd_c G$ is open.</p>
<p>Let $K$ be a profinite Noetherian just-infinite group. Must $K$ be the profinite completion of some residually finite group $R$?</p>
|
Andrei Jaikin
| 10,482 |
<p>The answer is <strong>yes</strong> in general.</p>
<p>Since $K$ is finitely generated, by the Nikolov-Segal theorem it coincides with its own profinite completion. So you simply may take $R=K$.</p>
<p>Perhaps, you also want $R$ to be finitely generated. In this case the answer is <strong>no</strong>. </p>
<p>If $K$ is a non-soluble p-adic analytic pro-$p$ group, then it has polynomial subgroup growth. However, finitely generated groups of polynomial subgroup growth are virtually soluble (see the book "Subgroup Growth" of Lubotzky and Segal).</p>
|
4,262,888 |
<p>My task is to prove that if an atomic measure space is <span class="math-container">$\sigma$</span>-finite, then the set of atoms must be countable.</p>
<p>This is my given definition of an atomic measure space:</p>
<blockquote>
<p>Assume <span class="math-container">$(X,\mathcal{M},\mu)$</span> is a measure space with all single points being measurable. An <strong>atom</strong> is a point <span class="math-container">$x$</span> with
<span class="math-container">$\mu(\{x\}) > 0$</span>. Letting <span class="math-container">$\mathcal{A}$</span> be the set of atoms,
<span class="math-container">$(X,\mathcal{M},\mu)$</span> is called <strong>atomic</strong> if <span class="math-container">$\mathcal{A}\in\mathcal{M}$</span>
and <span class="math-container">$\mu(\mathcal{A^c}) = 0$</span>.</p>
</blockquote>
<hr />
<p>I didn't know how to prove this at first, so I looked it up on stack exchange and found <a href="https://math.stackexchange.com/a/850597/933963">this answer</a>: (I do not have enough reputation to comment on the original post)</p>
<blockquote>
<p>Here's how to prove your claim, with the appropriate assumption. Let
<span class="math-container">$S\subset X$</span> be the set of atoms for some measure <span class="math-container">$\mu$</span> on <span class="math-container">$X$</span>. Let
<span class="math-container">$\{U_i\}$</span> be a countable measurable partition of <span class="math-container">$X$</span>. Then if <span class="math-container">$S$</span> is
uncountable, some <span class="math-container">$U_i$</span> contains an uncountable subset <span class="math-container">$S'$</span> of <span class="math-container">$S$</span>,
and <span class="math-container">$\mu(U_i)\geq \sum_{x\in S'}\mu(x)=\infty$</span> since any uncountable
sum of positive numbers diverges. Thus <span class="math-container">$\mu$</span> is not <span class="math-container">$\sigma$</span>-finite.</p>
</blockquote>
<p>My question is why do we have that <span class="math-container">$\mu(U_i) \geq \sum_{x\in S'} \mu(x)$</span> ? I am assuming that this inequality comes from subadditivity of <span class="math-container">$\mu$</span>
but as I have understood it subadditivity is defined for countable unions, not for uncountable unions so I am confused as to how we arrive at an uncountable sum in this step.</p>
|
Community
| -1 |
<p>You may argue as follows. Since <span class="math-container">$U_i\supset S'$</span> for some <span class="math-container">$i\in\mathbb{N}$</span>,
<span class="math-container">$$
\mu(U_i)\ge \mu(S'')\label{1}\tag{1}
$$</span>
for any <span class="math-container">$S''\subseteq S'$</span>. Let <span class="math-container">$\mathcal{S'}$</span> be the collection of all sequences of points in <span class="math-container">$S'$</span>. Then,
<span class="math-container">$$
\mu(U_i)\ge \sup_{s\in\mathcal{S'}}\sum_{x\in s}\mu(\{x\})\equiv\sum_{x\in S'}\mu(\{x\})=\infty,
$$</span>
where the inequality follows from \eqref{1} and countable additivity, and the last equality holds because there exists <span class="math-container">$\epsilon>0$</span> s.t. <span class="math-container">$\mu(\{x\})\ge \epsilon$</span> for uncountably many <span class="math-container">$x$</span>'s in <span class="math-container">$S'$</span>.</p>
|
465,999 |
<p>I'm not sure of this, can I have a constraint like this in a linear programming problem to be solved with simplex algorithm?</p>
<p>$$n_1t_1 + n_2t_2 > 200$$</p>
<p>where $n_1$ and $t_1$, $n_2$ and $t_2$ are different variables.</p>
|
user2566092
| 87,313 |
<p>This is an example of a quadratic (non-linear) constraint. You can look up quadratically constrained quadratic programming (QCQP), e.g. on Wikipedia, if you want to learn more about how problems with quadratic constraints and/or objectives are solved, although the computations are generally much slower and/or not guaranteed to find the best answer, in contrast to linear programming problems and algorithms. This is because, unlike linear programming, QCQP is NP-hard (it is very unlikely that any fast algorithm exists that is guaranteed to find the optimal solution). Somebody else mentioned that you may be able to make substitutions and still use LP for some special situations similar to your example constraint, but if you have general quadratic constraints this won't always work, and you'll have to use QCQP instead of linear programming.</p>
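<p>For a rough idea of what handing such a constraint to a nonlinear solver looks like in practice, here is a minimal SciPy sketch; the objective below is hypothetical (the question only specifies the constraint), and a local solver like SLSQP gives no global-optimality guarantee for nonconvex problems of this kind:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

# variables packed as v = [n1, t1, n2, t2]; hypothetical objective: minimize their sum
objective = lambda v: v.sum()
constraints = [{"type": "ineq", "fun": lambda v: v[0]*v[1] + v[2]*v[3] - 200}]  # n1*t1 + n2*t2 >= 200
res = minimize(objective, x0=np.full(4, 10.0), method="SLSQP",
               bounds=[(0, None)] * 4, constraints=constraints)
print(res.x, res.fun)   # a local optimum only
</code></pre>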
|
365,287 |
<p>Let $([0,1],\mathcal{B},m)$ be the Borel $\sigma$-algebra with Lebesgue measure and $([0,1],\mathcal{P},\mu)$ be the power set with counting measure. Consider the product $\sigma$-algebra on $[0,1]^2$ and the product measure $m \times \mu$.</p>
<p>(1) Is $D=\{(x,x)\in[0,1]^2\}$ measurable?</p>
<p>(2) If so, what is $m \times \mu(D)$?</p>
<p>Edit: The product measure is defined on a rectangle by $m \times \mu(A\times B)=m(A)\mu(B)$ and on general set taking infimum over union of rectangles containing $D$. (Maybe we should use $0 \cdot \infty=0$)</p>
|
Peter Smith
| 35,151 |
<p>Again ('cos I've recommended it before) I can very warmly recommend getting your students to beg/borrow/buy and then <strong>read</strong> the excellent</p>
<blockquote>
<p>Daniel J. Velleman, <em>How to Prove it: A Structured Approach</em> (CUP, 1994 and much reprinted, and now into a second edition).</p>
</blockquote>
<p>From the blurb: "Many students have trouble the first time they take a mathematics course in which proofs play a significant role. This new edition of Velleman's successful text will prepare students to make the transition from solving problems to proving theorems by teaching them the techniques needed to read and write proofs." And in doing that, the most important thing, Velleman's book should get students to understand the lingo properly as they better understand what they are doing when they make a supposition here, draw an inference there, use a quantifier, etc. </p>
|
2,541,991 |
<p>I need to find a pair of dependent random variables $(X, Y)$ with covariance equal to $0.$ From this I gather:</p>
<p>$$0 = E((X-EX)(Y-EY)) = E \left(\left(X - \int_{-\infty}^\infty xf_X(x)\,dx\right) \left(Y - \int_{-\infty}^\infty xf_Y(x)\,dx \right)\right)$$</p>
<p>but what can I do now? How can I use the fact that they are dependent in the equation? Do you know of two such variables?</p>
|
Robert Israel
| 8,508 |
<p>Hint: try $X$ and $X^2$ where the distribution of $X$ is symmetric about $0$.</p>
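<p>Following the hint, a quick numerical check with $X$ standard normal (so symmetric about $0$): $Y=X^2$ is a function of $X$, hence dependent on it, yet the sample covariance is essentially zero because $\operatorname{Cov}(X,X^2)=E[X^3]=0$.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(1_000_000)   # distribution symmetric about 0
Y = X**2                             # a deterministic function of X, so clearly dependent
print(np.cov(X, Y)[0, 1])            # ~ 0
</code></pre>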
|
3,548,064 |
<p>I have two equalities:
<span class="math-container">$$ \alpha x^{2} + \alpha y^{2} - y = 0 $$</span>
<span class="math-container">$$ \beta x^{2} + \beta y^{2} - x = 0 $$</span></p>
<p>Where <span class="math-container">$$ \alpha, \beta $$</span> are both known constants.</p>
<p>How can I solve for <span class="math-container">$x$</span> and <span class="math-container">$y$</span>? </p>
|
Eric Towers
| 123,905 |
<p>The number of solutions depends on how many of <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> are zero.</p>
<ul>
<li>For any <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, <span class="math-container">$x = y = 0$</span> is a (trivial) solution.</li>
<li><span class="math-container">$\alpha = \beta = 0$</span>: Only the trivial solution occurs.</li>
<li><span class="math-container">$\alpha \neq 0$</span> or <span class="math-container">$\beta \neq 0$</span> (or both): A nontrivial solution also appears, <span class="math-container">$x = \frac{\beta}{\alpha^2 + \beta^2}$</span> and <span class="math-container">$y = \frac{\alpha}{\alpha^2 + \beta^2}$</span>.</li>
</ul>
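<p>The nontrivial solution in the last bullet can be verified by direct substitution, e.g. with SymPy:</p>
<pre><code>from sympy import symbols, simplify

a, b = symbols('alpha beta')
xs, ys = b / (a**2 + b**2), a / (a**2 + b**2)      # the claimed nontrivial solution
print(simplify(a*xs**2 + a*ys**2 - ys))            # 0
print(simplify(b*xs**2 + b*ys**2 - xs))            # 0
</code></pre>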
|
164,002 |
<p>When I am reading a mathematical textbook, I tend to skip most of the exercises.
Generally I don't like exercises, particularly artificial ones.
Instead, I concentrate on understanding proofs of theorems, propositions, lemmas, etc..</p>
<p>Sometimes I try to prove a theorem before reading the proof.
Sometimes I try to find a different proof.
Sometimes I try to find an example or a counter-example.
Sometimes I try to generalize a theorem.
Sometimes I come up with a question and I try to answer it. </p>
<p>I think those are good "exercises" for me.</p>
<p><strong>EDIT</strong>
What I think is a very good "exercise" is as follows:</p>
<p>(1) Try to prove a theorem before reading the proof.</p>
<p>(2) If you have no idea to prove it, take a look <strong>a bit</strong> at the proof.</p>
<p>(3) Continue to try to prove it.</p>
<p>(4) When you are stuck, take a look <strong>a bit</strong> at the proof.</p>
<p>(5) Repeat (3) and (4) until you come up with a proof.</p>
<p><strong>EDIT</strong>
Another method I recommend rather than doing "homework type" exercises:
Try to write a "textbook" on the subject.
You don't have to write a real one.
I tried to do this on Galois theory.
Actually I posted "lecture notes" on Galois theory on an internet mathematics forum.
I believe my knowledge and skill on the subject greatly increased.</p>
<p>For example, I found <a href="https://math.stackexchange.com/questions/131757/a-proof-of-the-normal-basis-theorem-of-a-cyclic-extension-field">this</a> while I was writing "lecture notes" on Galois theory.
I could also prove that any profinite group is a Galois group.
This fact was mentioned in Neukirch's algebraic number theory.
I found later that Bourbaki had this problem as an exercise.
I don't understand its hint, though.
Later I found someone wrote a paper on this problem.
I made other small "discoveries" during the course.
I was planning to write a "lecture note" on Grothendieck's Galois theory.
This is an attractive plan, but has not yet been started.</p>
<p><strong>EDIT</strong>
If you want to have exercises, why not produce them yourself?
When you are learning a subject, you naturally come up with questions.
Some of these can be good exercises. At least you have the motivation not given by others. It is not homework.
For example, I came up with the following question when I was learning algebraic geometry.
I found that this was a good problem.</p>
<p>Let $k$ be a field.
Let $A$ be a finitely generated commutative algebra over $k$.
Let $\mathbb{P}^n = \operatorname{Proj}(k[X_0, \dots, X_n])$.
Determine $\operatorname{Hom}_k(\operatorname{Spec}(A), \mathbb{P}^n)$.</p>
<p>As I wrote, trying to find examples or counter-examples can be good exercises, too.
For example, <a href="https://math.stackexchange.com/questions/133790/an-example-of-noncommutative-division-algebra-over-q-other-than-quaternion-alg">this</a> is a good exercise in the theory of division algebras.</p>
<p><strong>EDIT</strong>
Let me show you another example of self-exercises.
I encountered the following problem when I was writing a "lecture note" on Galois theory.</p>
<p>Let $K$ be a field.
Let $K_{sep}$ be a separable algebraic closure of $K$.
Let $G$ be the Galois group of $K_{sep}/K$.</p>
<p>Let $A$ be a finite dimensional algebra over $K$.
If $A$ is isomorphic to a product of fields each of which is separable over $K$, $A$ is called a finite etale algebra.
Let $FinEt(K)$ be the category of finite etale algebra over $K$.</p>
<p>Let $X$ be a finite set.
Suppose $G$ acts on $X$ continuously.
$X$ is called a finite $G$-set.
Let $FinSets(G)$ be the category of finite $G$-sets.</p>
<p><em>Then $FinEt(K)$ is anti-equivalent to $FinSets(G)$.</em></p>
<p>This is a zero-dimensional version of the main theorem of Grothendieck's Galois theory.
You can find the proof elsewhere, but I recommend you to prove it yourself.
It's not difficult and it's a good exercise of Galois theory.
<em>Hint</em>: Reduce it to the case that $A$ is a finite separable extension of $K$ and $X$ is a finite transitive $G$-set.</p>
<p><strong>EDIT</strong>
If you think this is too broad a question, you are free to add suitable conditions.
This is a soft question.</p>
|
nanme
| 33,515 |
<p>Absolutely. You can't truly say you understand something until you work through it.</p>
|
3,078,176 |
<p>Given an equation in partial derivatives of the form <span class="math-container">$Af_x+Bf_y=\phi(x,y)$</span>, for example <span class="math-container">$$f_x-f_y=(x+y)^2,$$</span> how do I know which change of coordinates is appropriate to solve the equation? In this example, the change of coordinates is <span class="math-container">$u=x+y$</span>, <span class="math-container">$v=x^2-y^2$</span>; why?</p>
|
Mostafa Ayaz
| 518,023 |
<p><strong>Hint</strong></p>
<p>use <span class="math-container">$$\sin \theta ={\tan\theta\over \sqrt{1+\tan^2\theta}}$$</span>whenever <span class="math-container">$\sin \theta , \tan \theta \ge 0$</span>.</p>
|
2,313,060 |
<p>$f(\bigcap_{\alpha \in A} U_{\alpha}) \subseteq \bigcap_{\alpha \in A}f(U_{\alpha})$</p>
<p>Suppose $y \in f(\bigcap_{\alpha \in A} U_{\alpha})$
$\implies f^{-1}(y) \in \bigcap_{\alpha \in A} U_{\alpha} \implies f^{-1}(y) \in U_{\alpha}$ for all $\alpha \in A$</p>
<p>$\implies y \in f (U_{\alpha})$ for all $\alpha \in A \implies y \in \bigcap_{\alpha \in A}f (U_{\alpha})$</p>
<p>$\bigcap_{\alpha \in A}f(U_{\alpha}) \subseteq f(\bigcap_{\alpha \in A} U_{\alpha})$</p>
<p>Suppose $y \in \bigcap_{\alpha \in A}f(U_{\alpha})\implies y \in f(U_{\alpha})$ for all $\alpha \in A$
$\implies f^{-1}(y)\in U_{\alpha}$ for all $\alpha\in A$</p>
<p>$ \implies f^{-1}(y)\in \bigcap_{\alpha \in A}U_{\alpha} \implies y \in f\left(\bigcap_{\alpha \in A}U_{\alpha}\right)$</p>
<p>Therefore</p>
<p>$f(\bigcap_{\alpha \in A} U_{\alpha}) \subseteq \bigcap_{\alpha \in A}f(U_{\alpha})$</p>
<p>Please let me know if my proof works. Also, I don't fully know how to do the following; please give me some help. </p>
<p>Give an example of proper containment. Find a condition on f that would ensure equality.</p>
|
MarnixKlooster ReinstateMonica
| 11,994 |
<p>May I propose a 'logical' approach? Here is what happens if you expand the definitions, to go from the level of set theory to that of logic, and then reason on the logic level.</p>
<p>But first, what <em>are</em> the definitions?$%
\newcommand{\calc}{\begin{align} \quad &}
\newcommand{\op}[1]{\\ #1 \quad & \quad \unicode{x201c}}
\newcommand{\hints}[1]{\mbox{#1} \\ \quad & \quad \phantom{\unicode{x201c}} }
\newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad & }
\newcommand{\endcalc}{\end{align}}
\newcommand{\subcalch}[1]{\\ \quad & \quad #1 \\ \quad &}
\newcommand{\subcalc}{\quad \begin{aligned} \quad & \\ \bullet \quad & }
\newcommand{\endsubcalc}{\end{aligned} \\ \\ \cdot \quad &}
\newcommand{\Ref}[1]{\text{(#1)}}
\newcommand{\then}{\Rightarrow}
\newcommand{\when}{\Leftarrow}
%$ The simplest formulations I know are $$
\tag{0}
y \in f\left[X\right] \;\equiv\; \langle \exists x : f(x) = y : x \in X \rangle
$$ and $$
\tag{1}
z \in \cap_{\alpha \in A} U_{\alpha} \;\equiv\; \langle \forall \alpha : \alpha \in A : z \in U_\alpha \rangle
$$ Here I'm using notations from Dijkstra et al. (see, e.g., <a href="https://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1300.html" rel="nofollow noreferrer">EWD1300</a>), and I've changed your notation slightly, to allow us to distinguish $\;f\left[X\right]\;$ from $\;f(x)\;$. Using this notation, your statement is $$
\tag{2}
f\left[\cap_{\alpha \in A} U_{\alpha}\right] \;\subseteq\; \cap_{\alpha \in A}f\left[U_{\alpha}\right]
$$</p>
<hr>
<p>Let's first calculate which elements $\;y\;$ are in the left hand side of $\Ref{2}$:
$$\calc
y \in f\left[\cap_{\alpha \in A} U_{\alpha}\right]
\op=\hint{definition $\Ref{0}$}
\langle \exists x : f(x) = y : x \in \cap_{\alpha \in A} U_{\alpha} \rangle
\op=\hint{definition $\Ref{1}$}
\langle \exists x : f(x) = y : \langle \forall \alpha : \alpha \in A : x \in U_\alpha \rangle \rangle
\tag{L}
\endcalc$$</p>
<p>And then the same for the right hand side:
$$\calc
y \in \cap_{\alpha \in A}f\left[U_{\alpha}\right]
\op=\hint{definition $\Ref{1}$}
\langle \forall \alpha : \alpha \in A : y \in f\left[U_{\alpha}\right] \rangle
\op=\hint{definition $\Ref{0}$}
\langle \forall \alpha : \alpha \in A : \langle \exists x : f(x) = y : x \in U_{\alpha} \rangle \rangle
\tag{R}
\endcalc$$</p>
<p>Now, comparing $\Ref{L}$ and $\Ref{R}$, we immediately see that we can apply the logic rule $\;\exists\forall \then \forall\exists\;$, and conclude that $\;\Ref{L} \then\Ref{R}\;$ for all $\;y\;$. And by the definition of $\;\subseteq\;$ that proves $\Ref{2}$.</p>
<hr>
<p>Finally, note how this proof also clearly shows why the $\;\supseteq\;$ inclusion direction is not true in general: it is because we need to appeal to the fundamentally unidirectional rule $\;\exists\forall \then \forall\exists\;$.</p>
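<p>As for the example of proper containment asked for in the question: a non-injective $\;f\;$ does the job. A tiny Python illustration (the choice $f(x)=x^2$, $U_1=\{-1\}$, $U_2=\{1\}$ is mine):</p>
<pre><code>f = lambda x: x * x                        # not injective
U1, U2 = {-1}, {1}
image = lambda S: {f(x) for x in S}
print(image(U1.intersection(U2)))          # set(): the image of U1 n U2 is empty
print(image(U1).intersection(image(U2)))   # {1}: strictly larger, so the inclusion is proper
# for an injective f the two sides always coincide
</code></pre>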
|
4,200,256 |
<p>Which numbers can be represented in the form of <span class="math-container">$x^2 + y^2$</span>, <span class="math-container">$x,y \in\mathbb N$</span>?</p>
<p>My approach : I found out that for this to satisfy, one of the prime factors of the number has to be congruent to <span class="math-container">$1$</span> (mod <span class="math-container">$4$</span>) . Is there any other elegant method for this</p>
|
Ibadul Qadeer
| 997,150 |
<p>An integer can be written as the sum of two squares whenever, in its prime factorization, every prime factor of the form <span class="math-container">$4k+3$</span> (if any) occurs with even multiplicity; odd prime factors of the form <span class="math-container">$4k+1$</span> may occur with any multiplicity.</p>
|
4,200,256 |
<p>Which numbers can be represented in the form of <span class="math-container">$x^2 + y^2$</span>, <span class="math-container">$x,y \in\mathbb N$</span>?</p>
<p>My approach : I found out that for this to satisfy, one of the prime factors of the number has to be congruent to <span class="math-container">$1$</span> (mod <span class="math-container">$4$</span>) . Is there any other elegant method for this</p>
|
Geoffrey Trang
| 684,071 |
<p>Let <span class="math-container">$A$</span> be the set of all prime numbers of the form <span class="math-container">$4k+3$</span>. Then, <span class="math-container">$n$</span> is the sum of two squares if and only if <span class="math-container">$\forall p \in A\, p \mid n \implies 2 \mid \nu_p(n)$</span>, where <span class="math-container">$\nu_p(n)$</span> is the <span class="math-container">$p$</span>-adic valuation of <span class="math-container">$n$</span>. In other words, <span class="math-container">$\nu_p(n)$</span> must be an even number for all primes <span class="math-container">$p$</span> of the form <span class="math-container">$4k+3$</span> dividing <span class="math-container">$n$</span>.</p>
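<p>This criterion is easy to test against brute force for small <span class="math-container">$n$</span> (allowing <span class="math-container">$x=0$</span> or <span class="math-container">$y=0$</span>, as in the classical two-squares theorem); a small Python/SymPy check:</p>
<pre><code>from sympy import factorint

def is_sum_of_two_squares(n):
    # brute force over x, y >= 0
    return any(round((n - x*x)**0.5)**2 == n - x*x for x in range(int(n**0.5) + 1))

def criterion(n):
    # every prime p = 3 (mod 4) must appear with even exponent
    return all(e % 2 == 0 for p, e in factorint(n).items() if p % 4 == 3)

print(all(is_sum_of_two_squares(n) == criterion(n) for n in range(1, 2000)))   # True
</code></pre>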
|
4,200,256 |
<p>Which numbers can be represented in the form of <span class="math-container">$x^2 + y^2$</span>, <span class="math-container">$x,y \in\mathbb N$</span>?</p>
<p>My approach : I found out that for this to satisfy, one of the prime factors of the number has to be congruent to <span class="math-container">$1$</span> (mod <span class="math-container">$4$</span>) . Is there any other elegant method for this</p>
|
Jack D'Aurizio
| 44,121 |
<p>I will outline a proof of the characterization already given by Geoffrey Trang.</p>
<p><strong>Step 1</strong>. The primes of the form <span class="math-container">$x^2+y^2$</span> are <span class="math-container">$2$</span> and the primes <span class="math-container">$\equiv 1\pmod{4}$</span>.<br>
Trivially, a prime <span class="math-container">$\equiv 3\pmod{4}$</span> cannot be represented by <span class="math-container">$x^2+y^2$</span> and <span class="math-container">$2=1^2+1^2$</span>.<br> If <span class="math-container">$p\equiv 1\pmod{4}$</span> the Legendre symbol <span class="math-container">$\left(\frac{-1}{p}\right)=(-1)^{\frac{p-1}{2}}$</span> gives that <span class="math-container">$-1$</span> is a quadratic residue in <span class="math-container">$\mathbb{F}_p$</span>, so for some integer <span class="math-container">$a\leq\frac{p-1}{2}$</span> we have <span class="math-container">$a^2+1= kp$</span> with <span class="math-container">$k<\frac{p}{4}$</span>. The fact that the norm in <span class="math-container">$\mathbb{Z}[i]$</span> is multiplicative leads to Lagrange's identity
<span class="math-container">$$ (a^2+b^2)(c^2+d^2) = (ad-bc)^2+(ac+bd)^2. $$</span>
By Fermat's descent we are able to go from <span class="math-container">$a^2+1=kp$</span> to <span class="math-container">$a^2+b^2=p$</span>: every prime <span class="math-container">$\equiv 1\pmod{4}$</span> can be written as a sum of two squares in an essentially unique way. By Lagrange's identity once again we have that any <span class="math-container">$n$</span> without prime factors of the form <span class="math-container">$4k+3$</span> can be written as a sum of two squares. The same holds if for every prime divisor <span class="math-container">$p$</span> of the form <span class="math-container">$4k+3$</span> we have <span class="math-container">$\nu_p(n)\equiv 0\pmod{2}$</span>.</p>
<p><strong>Step 2</strong>. If there is a prime divisor <span class="math-container">$p$</span> of the form <span class="math-container">$4k+3$</span>, such that <span class="math-container">$\nu_p(n)$</span> is odd, <span class="math-container">$n$</span> cannot be written as a sum of two squares. In <span class="math-container">$\mathbb{F}_p$</span> we have that <span class="math-container">$-1$</span> is not a quadratic residue, hence <span class="math-container">$x^2+y^2\equiv 0\pmod{p}$</span> implies that both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are multiples of <span class="math-container">$p$</span>. In turn this implies that <span class="math-container">$\nu_p(n)$</span> is even, contradiction.</p>
<p><strong>Note</strong>. <span class="math-container">$21\equiv 1\pmod{4}$</span> but it cannot be written as a sum of two squares, since <span class="math-container">$\nu_3(21)$</span> is odd.<br>
Something similar applies to <span class="math-container">$2021=43\cdot 47$</span>.</p>
|
3,197,683 |
<p>Here is the theorem that I need to prove</p>
<blockquote>
<p>For <span class="math-container">$K = \mathbb{Q}[\sqrt{D}]$</span> we have</p>
<p><span class="math-container">$$\begin{align}O_K = \begin{cases}
\mathbb{Z}[\sqrt{D}] & D \equiv 2, 3 \mod 4\\
\mathbb{Z}\left[\frac{1 + \sqrt{D}}{2}\right] & D \equiv 1 \mod 4
\end{cases}
\end{align}$$</span></p>
</blockquote>
<p>The theorem we need to use is this one that can be found in any generic number theory textbook.</p>
<blockquote>
<p>an element <span class="math-container">$\alpha\in K$</span> is an algebraic integer if and only if its minimal polynomial has coefficients in <span class="math-container">$\mathbb{Z}$</span>.</p>
</blockquote>
<p>I tried many avenues of attack but it is extremely hard to prove. How do I prove it?</p>
|
bsbb4
| 337,971 |
<p><span class="math-container">$\bullet$</span> Let <span class="math-container">$d \equiv 1 \mod 4.$</span></p>
<p>Let <span class="math-container">$A=x+y\sqrt{d} \in \mathcal O_K$</span> be arbitrary. We have</p>
<p><span class="math-container">$(A-x)^2 - dy^2 = 0$</span>. Expanding,</p>
<p><span class="math-container">$A^2 -2xA +x^2 - dy^2 = 0$</span>.</p>
<p>Now we use the fact that <span class="math-container">$m_A(x)$</span>, the minimal polynomial of <span class="math-container">$A$</span>, must be in <span class="math-container">$\Bbb Z[x]$</span>: its coefficient <span class="math-container">$-2x$</span> must be an integer, so we must have <span class="math-container">$x=\frac{r}{2}$</span> for some <span class="math-container">$r \in \Bbb Z$</span>. </p>
<p>Since <span class="math-container">$x^2 - dy^2 \in \Bbb Z$</span> too, we must have <span class="math-container">$y = \frac{s}{2}$</span>. </p>
<p>So <span class="math-container">$x^2 - dy^2 = \frac{r^2}{4} -d\frac{s^2}{4}$</span>. </p>
<p>This implies <span class="math-container">$r^2 - ds^2 \equiv r^2 - s^2 \equiv 0 \mod 4$</span>.</p>
<p>So <span class="math-container">$r \equiv s \mod 2$</span>, i.e. <span class="math-container">$r=s+2t$</span>.</p>
<p>Putting this together, <span class="math-container">$A=\frac{r}{2} + \frac{s}{2}\sqrt{d} = t+s\frac{1+\sqrt{d}}{2}$</span> as required, showing that <span class="math-container">$\mathcal O_K = \Bbb Z[\frac{1+\sqrt{d}}{2}]$</span>.</p>
<p><span class="math-container">$\bullet$</span> Now assume <span class="math-container">$d \equiv 2 \mod 4$</span>.</p>
<p>We can repeat the above argument, arriving at</p>
<p><span class="math-container">$r^2 +2s^2 \equiv 0 \mod 4$</span>.</p>
<p>The only solution is <span class="math-container">$(r, s) \equiv (0, 0) \mod 2$</span>, so in fact <span class="math-container">$x, y \in \Bbb Z$</span> and <span class="math-container">$\mathcal O_K = \Bbb Z[\sqrt{d}]$</span>.</p>
<p><span class="math-container">$\bullet$</span> For <span class="math-container">$d \equiv 3 \mod 4$</span> we have to consider <span class="math-container">$r^2 + s^2 \equiv 0 \mod 4$</span> and we come to the same conclusion as in the case <span class="math-container">$d \equiv 2 \mod 4$</span>.</p>
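<p>A quick SymPy sanity check of the statement (SymPy returns the primitive integer minimal polynomial, so "<span class="math-container">$\frac{1+\sqrt d}{2}$</span> is an algebraic integer" shows up as that polynomial being monic, which happens exactly for <span class="math-container">$d\equiv 1 \bmod 4$</span>):</p>
<pre><code>from sympy import sqrt, minimal_polynomial, Symbol

x = Symbol('x')
for d in [-3, 5, 13, -1, 2, 3, -2, 7]:         # squarefree sample values
    p = minimal_polynomial((1 + sqrt(d)) / 2, x)
    print(d % 4, p)                            # monic (leading coefficient 1) iff d % 4 == 1
</code></pre>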
|
4,441,034 |
<blockquote>
<p>Consider function <span class="math-container">$f(x)$</span> whose derivative is continuous on the interval <span class="math-container">$[-3; 3]$</span> and the graph of the function <span class="math-container">$y = f'(x)$</span> is pictured below. Given that <span class="math-container">$g(x) = 2f(x) + x^2 + 4$</span> and <span class="math-container">$f(1) = -24$</span>, how many real roots of the equation <span class="math-container">$g(x) = 0$</span> are there on the interval <span class="math-container">$[-3; 3]$</span>?</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/yMyHY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yMyHY.png" alt="enter image description here" /></a></p>
<p>[For context, this question is taken from an exam whose format consists of 50 multiple-choice questions with a time limit of 90 minutes. Calculators are the only electronic device allowed in the testing room. (You know those scientific calculators sold at stationery stores and sometimes bookstores? They are the goods.) I need a solution that works within these constraints. Thanks for your cooperation, as always. (Do I need to sound this professional?)</p>
<p>By the way, if the wording of the problem sounds rough, sorry for that. I'm not an expert at translating documents.]</p>
<p>Excuse my waterlogged, dropped-on-the-floor-too-many-times, memory-of-a-goldfish-with-dementia brick camera quality.</p>
<p>Anyhow, first of all, <span class="math-container">$g(1) = 2f(1) + 1^2 + 4 = -43$</span> and <span class="math-container">$$g'(x) = 0 \iff 2[f'(x) + x] = 0 \iff \left[ \begin{aligned} x &= -3\\ x &= 1\\ x&= 3 \end{aligned} \right.$$</span></p>
<p>From which, we can draw the table of variations (<em>in glorious Technicolor</em>).</p>
<p><a href="https://i.stack.imgur.com/gCTc2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gCTc2.png" alt="enter image description here" /></a></p>
<p>Now, we just need to know whether <span class="math-container">$2f(\pm 3) + 13$</span> are greater than zero. And this is where I have a problem. How am I supposed to know?</p>
<p>Perhaps it's through the values of <span class="math-container">$\displaystyle \int_1^{\pm 3}f'(x)\, \mathrm dx = f(\pm 3) - f(1)$</span>, which could be somewhat perceived in the graph itself, but I am not entirely sure.</p>
<p>Anyhow, thanks for reading, (and even more if you could help~)</p>
|
user2661923
| 464,411 |
<p>In my opinion, the best way (for me anyway) to tackle this problem is to take off my shoes, and stretch my intuition with a specific example.</p>
<p>Suppose that <span class="math-container">$n = 6$</span>, and you compute</p>
<p><span class="math-container">$$\sum_{k=1}^6 \sum_{d|k} 1 = 1 + 2 + 2 + 3 + 2 + 4 = 14. \tag1 $$</span></p>
<p>In (1) above:</p>
<ul>
<li>the factor of <span class="math-container">$(1)$</span> is counted <span class="math-container">$6$</span> times.</li>
<li>the factor of <span class="math-container">$(2)$</span> is counted <span class="math-container">$(3)$</span> times.</li>
<li>the factor of <span class="math-container">$(3)$</span> is counted <span class="math-container">$(2)$</span> times.</li>
<li>the factors of the numbers <span class="math-container">$(4), (5),$</span> and <span class="math-container">$(6)$</span> are each counted once.</li>
</ul>
<p>For any real number <span class="math-container">$r$</span>, let <span class="math-container">$f(r)$</span> denote <span class="math-container">$\lfloor r\rfloor$</span> (i.e. the <em>floor</em> of <span class="math-container">$r$</span>), which is the largest integer <span class="math-container">$\leq r$</span>.</p>
<p>So, you can (in effect) reverse the order of summation by expressing it as</p>
<p><span class="math-container">$$\sum_{d=1}^n \sum_{k=1}^{f\left(\frac{n}{d}\right)} 1. \tag2 $$</span></p>
<p>The reason that (2) above represents a reversal of the order of summation of (1) above is that in (2) above, the outer summation represents the potential divisors, while the inner summation represents the upper bound of the number of times that a specific divisor should be <em>counted</em>.</p>
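<p>The resulting identity <span class="math-container">$\sum_{k=1}^n \sum_{d\mid k} 1=\sum_{d=1}^n\lfloor n/d\rfloor$</span> is easy to confirm numerically (plain Python; <span class="math-container">$n=1000$</span> is arbitrary):</p>
<pre><code>n = 1000
lhs = sum(sum(1 for d in range(1, k + 1) if k % d == 0) for k in range(1, n + 1))
rhs = sum(n // d for d in range(1, n + 1))
print(lhs == rhs)   # True
</code></pre>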
|
4,441,034 |
<blockquote>
<p>Consider function <span class="math-container">$f(x)$</span> whose derivative is continuous on the interval <span class="math-container">$[-3; 3]$</span> and the graph of the function <span class="math-container">$y = f'(x)$</span> is pictured below. Given that <span class="math-container">$g(x) = 2f(x) + x^2 + 4$</span> and <span class="math-container">$f(1) = -24$</span>, how many real roots of the equation <span class="math-container">$g(x) = 0$</span> are there on the interval <span class="math-container">$[-3; 3]$</span>?</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/yMyHY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yMyHY.png" alt="enter image description here" /></a></p>
<p>[For context, this question is taken from an exam whose format consists of 50 multiple-choice questions with a time limit of 90 minutes. Calculators are the only electronic device allowed in the testing room. (You know those scientific calculators sold at stationery stores and sometimes bookstores? They are the goods.) I need a solution that works within these constraints. Thanks for your cooperation, as always. (Do I need to sound this professional?)</p>
<p>By the way, if the wording of the problem sounds rough, sorry for that. I'm not an expert at translating documents.]</p>
<p>Excuse my waterlogged, dropped-on-the-floor-too-many-times, memory-of-a-goldfish-with-dementia brick camera quality.</p>
<p>Anyhow, first of all, <span class="math-container">$g(1) = 2f(1) + 1^2 + 4 = -43$</span> and <span class="math-container">$$g'(x) = 0 \iff 2[f'(x) + x] = 0 \iff \left[ \begin{aligned} x &= -3\\ x &= 1\\ x&= 3 \end{aligned} \right.$$</span></p>
<p>From which, we can draw the table of variations (<em>in glorious Technicolor</em>).</p>
<p><a href="https://i.stack.imgur.com/gCTc2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gCTc2.png" alt="enter image description here" /></a></p>
<p>Now, we just need to know whether <span class="math-container">$2f(\pm 3) + 13$</span> are greater than zero. And this is where I have a problem. How am I supposed to know?</p>
<p>Perhaps it's through the values of <span class="math-container">$\displaystyle \int_1^{\pm 3}f'(x)\, \mathrm dx = f(\pm 3) - f(1)$</span>, which could be somewhat perceived in the graph itself, but I am not entirely sure.</p>
<p>Anyhow, thanks for reading, (and even more if you could help~)</p>
|
epi163sqrt
| 132,007 |
<p>Here we change the order of summation by transformations of the sums only.</p>
<blockquote>
<p>We obtain
<span class="math-container">\begin{align*}
\color{blue}{\sum_{k=1}^nd(k)}&=\sum_{k=1}^n\sum_{{d=1}\atop{d\mid k}}^k1
=\sum_{{1\leq d\leq k\leq n}\atop {d\mid k}}1\tag{1}\\
&=\sum_{d=1}^n\sum_{{k=d}\atop{d\mid k}}^n 1\tag{2}\\
&=\sum_{d=1}^n\sum_{{k=d}\atop {dd^{\prime}=k}}^n1\tag{3}\\
&=\sum_{d=1}^n\sum_{d^{\prime}=1}^{\left\lfloor n/d\right\rfloor} 1\tag{4}\\
&\,\,\color{blue}{=\sum_{d=1}^n\left\lfloor\frac{n}{d}\right\rfloor}\tag{5}
\end{align*}</span></p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we write the index range somewhat more conveniently.</p>
</li>
<li><p>In (2) we change the order of summation.</p>
</li>
<li><p>In (3) we introduce <span class="math-container">$d^\prime$</span> using the definition of the divisor <span class="math-container">$d$</span>.</p>
</li>
<li><p>In (4) we sum over <span class="math-container">$d^\prime$</span> instead of <span class="math-container">$k$</span>. We observe <span class="math-container">$d^\prime=1$</span> if <span class="math-container">$k=d$</span>, <span class="math-container">$d^\prime=2$</span> if <span class="math-container">$k=2d$</span>, etc.</p>
</li>
<li><p>In (5) we make a final simplification.</p>
</li>
</ul>
<p>This is a special case of the following identity
<span class="math-container">\begin{align*}
\color{blue}{\sum_{k=1}^n\sum_{d\mid k}f(d,k)=\sum_{d=1}^n\sum_{k=1}^{\left\lfloor n/d\right\rfloor}f(d,kd)}
\end{align*}</span>
which is shown in <em><a href="https://math.stackexchange.com/questions/2988770/interchanging-summation-involving-divisors-in-index/2989265#2989265">this answer</a></em>.</p>
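<p>The general identity at the end can also be checked numerically with an arbitrary test function (plain Python; the choices of <span class="math-container">$f$</span> and <span class="math-container">$n$</span> below are mine):</p>
<pre><code>f = lambda d, k: d * d + 3 * k        # arbitrary test function of (d, k)
n = 500
lhs = sum(f(d, k) for k in range(1, n + 1) for d in range(1, k + 1) if k % d == 0)
rhs = sum(f(d, k * d) for d in range(1, n + 1) for k in range(1, n // d + 1))
print(lhs == rhs)                     # True
</code></pre>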
|
3,882,214 |
<p>I have a question that involves an Argand diagram. The complex number <strong>u = 1 + 1i</strong> is the center of a circle, and the radius is one. In other words, <span class="math-container">$$|z - (1 + 1i)| = 1$$</span></p>
<p>Now my issue is the following: I need to <strong>calculate the least value of |z| for the points on this locus using the diagram</strong>. Here's the sketch:</p>
<p><a href="https://i.stack.imgur.com/RPa9e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RPa9e.png" alt="enter image description here" /></a></p>
<p>So how do I find that least |z|? I understand that it'll involve a tangent to the circle, and I assume it's on the bottom right side of the circle, closest to the origin, but I'm not sure how to go about doing this.</p>
|
Tryst with Freedom
| 688,539 |
<p>Hint: Draw a line segment from the origin to the center of the circle; once that is done ... label each term geometrically.</p>
<p>Note: <span class="math-container">$|z|$</span> is distance of the point <span class="math-container">$ z$</span> from the origin</p>
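<p>Following the hint, the least value is the distance from the origin to the center minus the radius, i.e. <span class="math-container">$\sqrt 2-1$</span>; a quick numerical confirmation:</p>
<pre><code>import numpy as np

theta = np.linspace(0, 2 * np.pi, 100_000)
z = (1 + 1j) + np.exp(1j * theta)           # points on the locus |z - (1 + i)| = 1
print(np.abs(z).min(), np.sqrt(2) - 1)      # least |z| ~ 0.41421 = sqrt(2) - 1
</code></pre>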
|
2,094,657 |
<p>I found this interesting problem on the AoPS forum, but no one has posted an answer. I have no idea how to solve it.</p>
<blockquote>
<p>$$
\int_0^\infty \sin(x^n)\,dx
$$
For all positive rationals $n>1$, $I_n$ denotes the integral as above.</p>
<p>If $P_n$ denotes the product
$$
P_n=\prod_{r=1}^{n-1}I_{\bigl(\!\frac{n}{r}\!\bigr)}\,,
$$
then evaluate the following limit $L$
$$
L=\lim_{n\to\infty}\bigl(\sqrt{n}\,P_n\bigr)^{\frac{1}{n}}
$$</p>
</blockquote>
|
tired
| 101,233 |
<p>The integral $I(a)=\int_0^{\infty}\sin(x^a)\,dx=\Im\int_0^{\infty}\exp(ix^a)\,dx$ is easily calculated by using the analyticity of the integrand: rotate the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a}\,dx \right)$ or </p>
<blockquote>
<p>$$
I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right)
$$</p>
</blockquote>
<p>Now let us have a look at </p>
<p>$$
P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right)
$$</p>
<p>Now we observe that $I(1)=1$, so $\log(I(1))=0$ and $s_n=s_{n-1}$, so </p>
<p>$$
\frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right)
$$</p>
<p>which is a Riemann sum for an integral. Since also the prefactor in $P_n$ has a finite limit ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p>
<blockquote>
<p>$$
L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right)
$$</p>
</blockquote>
<p>where $J$ is given by</p>
<p>$$
J=\int_0^1\log\left(I\left(\tfrac{1}{x}\right)\right)dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx
$$</p>
<p>or </p>
<p>$$
J=-1-\log(2)+\frac{1}{2}\log(2\pi)
$$</p>
<p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. </p>
<p>We now can conclude from $\left(\star\right)$ that</p>
<blockquote>
<p>$$
L=\frac{\sqrt{2\pi}}{2e}
$$</p>
</blockquote>
<p>in accordance with @math110</p>
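<p>A numerical sanity check of this value, using the closed form for $I(a)$ above and working with logarithms because the raw product underflows:</p>
<pre><code>import math

def log_I(a):
    # log I(a) with I(a) = Gamma(1/a) * sin(pi/(2a)) / a
    return math.lgamma(1.0 / a) - math.log(a) + math.log(math.sin(math.pi / (2.0 * a)))

n = 20_000
s = sum(log_I(n / r) for r in range(1, n))        # sum_{r=1}^{n-1} log I(n/r)
print(math.exp((0.5 * math.log(n) + s) / n))      # ~ 0.461
print(math.sqrt(2 * math.pi) / (2 * math.e))      # ~ 0.461
</code></pre>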
|
76,505 |
<p>In the eighties, Grothendieck devoted a great amount of time to work on the foundations of homotopical algebra. </p>
<p>He wrote in "Esquisse d'un programme": "[D]epuis près d'un an, la plus grande partie de mon énergie a été consacrée à un travail de réflexion sur les <em>fondements de l'algèbre (co)homologique non commutative</em>, ou ce qui revient au même, finalement, de l'<em>algèbre homotopique</em>." (Beginning of section 7. English version <a href="http://matematicas.unex.es/~navarro/res/esquisseeng.pdf" rel="noreferrer">here</a>: "Since the month of March last year, so nearly a year ago, the greater part of my energy has been devoted to a work of reflection on the foundations of non-commutative (co)homological algebra, or what is the same, after all, of homotopic[al] algebra.) </p>
<p>In <a href="http://www.math.jussieu.fr/~maltsin/groth/ps/lettreder.pdf" rel="noreferrer">a letter to Thomason</a> written in 1991, he states: "[P]our moi le “paradis originel” pour l’algèbre topologique n’est nullement la sempiternelle catégorie ∆∧ semi-simpliciale, si utile soit-elle, et encore moins celle des espaces topologiques (qui l’une et l’autre s’envoient dans la 2-catégorie des topos, qui en est comme une enveloppe commune), mais bien la catégorie Cat des petites caégories, vue avec un œil de géomètre par l’ensemble d’intuitions, étonnamment riche, provenant des topos." [EDIT 1: Terrible attempt of translation, otherwise some people might miss the reason why I have asked this question: "To me, the "original paradise" for topological algebra is by no means the never-ending semi-simplicial category ∆∧ [he means the simplex category], for all its usefulness, and even less is it the category of topological spaces (both of them imbedded in the 2-category of toposes, which is a kind of common enveloppe for them). It is the category of small categories Cat indeed, seen through the eyes of a geometer with the set of intuitions, surprisingly rich, arising from toposes."]</p>
<p>If $Hot$ stands for the classical homotopy category, then we can see $Hot$ as the localization of $Cat$ with respect to functors of which the topological realization of the nerve is a homotopy equivalence (or equivalently a topological weak equivalence). This definition of $Hot$ still makes use of topological spaces. However, topological spaces are in fact not necessary to define $Hot$. Grothendieck defines a <em>basic localizer</em> as a $W \subseteq Fl(Cat)$ satisfying the following properties: $W$ is weakly saturated; if a small category $A$ has a terminal object, then $A \to e$ is in $W$ (where $e$ stands for the trivial category); and the relative version of Quillen Theorem A holds. This notion is clearly stable by intersection, and Grothendieck conjectured that classical weak equivalences of $Cat$ form the smallest basic localizer. This was proved by Cisinski in his thesis, so that we end up with a categorical definition of the homotopy category $Hot$ without having mentioned topological spaces. (Neither have we made use of simplicial sets.) </p>
<p>I personally found what Grothendieck wrote on the subject quite convincing, but of course it is a rather radical change of viewpoint regarding the foundations of homotopical algebra. </p>
<p>A related fact is that Grothendieck writes in "Esquisse d'un programme" that "la "<em>topologie générale</em>" <em>a été développée</em> (dans les années trente et quarante) <em>par des analystes et pour les besoins de l'analyse</em>, non pour les besoins de la topologie proprement dite, c'est à dire l'étude des <em>propriétés topologiques de formes géométriques</em> diverses". ("[G]eneral topology” was developed (during the thirties and forties) by analysts and in order to meet the needs of analysis, not for topology per se, i.e. the study of the topological properties of the various geometrical shapes." See the link above.) This sentence has already been alluded to on MO, for instance in Allen Knutson's answer <a href="https://mathoverflow.net/questions/8204/how-can-i-really-motivate-the-zariski-topology-on-a-scheme/14354#14354">there</a> or Kevin Lin's comment <a href="https://mathoverflow.net/questions/14314/algebraic-topologies-like-the-zariski-topology">there</a>. </p>
<p>So much for the personal background of this question.</p>
<p>It is not new that $Top$, the category of all topological spaces and continuous functions, does not possess all the desirable properties from the geometric and homotopical viewpoint. For instance, there are many situations in which it is necessary to restrict oneself to some subcategory of $Top$. I expect there are many more instances of "failures" of $Top$ from the homotopical viewpoint than the few I know of, and I would like to have a list of such "failures", from elementary ones to deeper or less-known ones. I do not give any example myself on purpose, but I hope the question as stated below is clear enough. Here it is, then: </p>
<blockquote>
<p>In which situations is it noticeable that $Top$ (the category of general topological spaces and continuous maps) is not adapted to geometric or homotopical needs? Which facts "should be true" but are not? And what do people usually do when encountering such situations? </p>
</blockquote>
<p>As usual, please post only one answer per post so as to allow people to upvote or downvote single answers.</p>
<p>P.S. I would like to make sure that nobody interprets this question as "why should we get rid of topological spaces". This, of course, is not what I have in mind! </p>
|
Ronnie Brown
| 19,949 |
<p>My answer is in agreement with Grothendieck that topological spaces may be seen as inadequate for many geometric, and in particular, homotopical purposes. Round about 1970, I spent 9 years trying to generalise the fundamental groupoid of a topological space to dimension 2, using a notion of double groupoid to reflect the idea of "algebraic inverse to subdivision" and in the hope of proving a 2-dimensional van Kampen type theorem. In discussion with Philip Higgins in 1974 we agreed that: </p>
<p>1) Whitehead's theorem on free crossed modules, that $\pi_2(X \cup \{e^2_\lambda\},X,x)$ was a free crossed $\pi_1(X,x)$-module, was an instance of a 2-dimensional universal property in homotopy theory. </p>
<p>2) If our proposed theories were to be any good, then Whitehead's theorem should be a corollary. </p>
<p>However we observed that Whitehead's theorem was about <em>relative homotopy groups</em>. So we tried to define a homotopy double groupoid of a <em>pair of pointed spaces</em>, mapping a square into $X$ in which the edges go to $A$ and the vertices to the base point, and taking homotopy classes of such maps. This worked like a dream, and we were able to formulate and prove our theorem, published after some delays (and in the teeth of opposition!) in 1978. </p>
<p>We could then see how to generalise this to filtered spaces, but the proofs needed new ideas, and were published in 1981; this and subsequent work has evolved into the book ``Nonabelian algebraic topology'' published last August. </p>
<p>Contact with Loday who had defined a special kind of $(n+1)$-fold groupoid for an $n$-cube of spaces led to a more powerful van Kampen Theorem, with a totally different type of proof, published jointly in 1987. This allows for calculations of some homotopy $n$-types, and has as a Corollary an $n$-ad connectivity theorem, with a calculation of the critical (nonabelian!) $n$-ad homotopy group, as has been made more explicit by Ellis and Steiner, using the notion of a crossed $n$-cube of groups. </p>
<p>Thus we could get useful strict homotopy multiple groupoids for kinds of structured spaces, allowing calculations not previously possible. </p>
<p>In this way, Grothendieck's view is verified that as spaces with some kind of structure arise naturally in geometric situations, there should be advantages if the algebraic methods take proper cognisance of this structure from the start. That is, one should consider the data which define the space of interest. </p>
|
4,522,097 |
<p><span class="math-container">$$
X = \begin{pmatrix}
1+b_1 & 1 & 0 & 0 & 0 & \frac{1}{a_{6}} \\
1+b_2 & 1 & 1 & 0 & 0 & -\frac{a_1}{a_6} \\
b_3 & 1 & 1 & 1 & 0 & -\frac{a_2}{a_6} \\
b_4 & 0 & 1 & 1 & 1 & -\frac{a_3}{a_6} \\
b_5 & 0 & 0 & 1 & 1 & 1-\frac{a_4}{a_6} \\
b_6 & 0 & 0 & 0 & 1 & 1-\frac{a_5}{a_6}
\end{pmatrix}$$</span></p>
<p>The Schur complement w.r.t. the first and last row/column gives</p>
<p><span class="math-container">$$S = \begin{pmatrix}
1+b_1 & \frac{1}{a_6}\\
b_6 & 1 - \frac{a_5}{a_6}
\end{pmatrix}
-\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 &1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 & 0\\ 1 & 1 & 1 & 0\\ 0 &1 &1&1\\0&0&1&1 \end{pmatrix}^{-1}
\begin{pmatrix}1+b_2 & -\frac{a_1}{a_6} \\ b_3 & -\frac{a_2}{a_6}\\ b_4 & -\frac{a_3}{a_6}\\ b_5 & 1-\frac{a_4}{a_6}\end{pmatrix}.$$</span></p>
<p><span class="math-container">$$S = \begin{pmatrix} b_1 - b_2 + b_4 - b_5 & - \frac{-1-a_1+a_3-a_4+a_6}{a_6}\\-1-b_2+b_3-b_5+b_6 & \frac{a_1-a_2 + a_4 - a_5}{a_6}\end{pmatrix}$$</span></p>
<p>Then <span class="math-container">$\det(X) = \det\Biggr(\begin{pmatrix} 1 & 1 & 0 & 0\\ 1 & 1 & 1 & 0\\ 0 &1 &1&1\\0&0&1&1 \end{pmatrix}\Biggl). \det(S)$</span>.</p>
<p>How is the matrix <span class="math-container">$S$</span> obtained? I am not sure why and how to choose the blocks here. I can see how this works for the matrix <span class="math-container">$M$</span> below, but how do I approach it for the matrix <span class="math-container">$X$</span>?</p>
<p>Suppose <span class="math-container">$M = \begin{pmatrix} A & B \\ C & D\end{pmatrix}$</span>. The Schur complement of <span class="math-container">$D$</span> w.r.t. <span class="math-container">$M$</span> is given by <span class="math-container">$M/D = A - B D^{-1} C$</span>. This is easy to apply when the blocks are contiguous, but when they are not contiguous, how do we apply the formula? That is, how do we get the matrix <span class="math-container">$S$</span>, and how does this connect with the matrix <span class="math-container">$X$</span>?</p>
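<p>(As a numerical sanity check of the block choice, here is a small Mathematica sketch; the symbols <code>a[i]</code>, <code>b[i]</code> are stand-ins for the <span class="math-container">$a_i,b_i$</span> above and are given random values. It extracts the <span class="math-container">$\{1,6\}$</span> rows/columns as the "corner" block and the middle <span class="math-container">$4\times4$</span> block as <span class="math-container">$D$</span>, and verifies the determinant identity.)</p>

<pre><code>vals = Join[Table[a[i] -> RandomReal[{1, 2}], {i, 6}],
            Table[b[i] -> RandomReal[{1, 2}], {i, 6}]];
X = {{1 + b[1], 1, 0, 0, 0, 1/a[6]},
     {1 + b[2], 1, 1, 0, 0, -a[1]/a[6]},
     {b[3], 1, 1, 1, 0, -a[2]/a[6]},
     {b[4], 0, 1, 1, 1, -a[3]/a[6]},
     {b[5], 0, 0, 1, 1, 1 - a[4]/a[6]},
     {b[6], 0, 0, 0, 1, 1 - a[5]/a[6]}} /. vals;
A0 = X[[{1, 6}, {1, 6}]];        B0 = X[[{1, 6}, {2, 3, 4, 5}]];
C0 = X[[{2, 3, 4, 5}, {1, 6}]];  D0 = X[[{2, 3, 4, 5}, {2, 3, 4, 5}]];
S0 = A0 - B0.Inverse[D0].C0;     (* the Schur complement S *)
Chop[Det[X] - Det[D0] Det[S0]]   (* 0 *)
</code></pre>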
<p>Also, how does the matrix <span class="math-container">$X$</span> differ from the matrix <span class="math-container">$Y$</span> below, obtained by clubbing together the blocks used to construct the matrix <span class="math-container">$S$</span>?</p>
<p><span class="math-container">$$
Y = \begin{pmatrix}
1+b_1 & \frac{1}{a_6} & 1 & 0 & 0 & 0 \\
b_6 & 1-\frac{a_5}{a_6} & 0 & 0 & 0 & 1 \\
1+b_2 & -\frac{a_1}{a_6} & 1 & 1 & 0 & 0 \\
b_3 & -\frac{a_2}{a_6} & 1 & 1 & 1 & 0 \\
b_4 & -\frac{a_3}{a_6} & 0 & 1 & 1 & 1\\
b_5 & 1-\frac{a_4}{a_6} & 0 & 0 & 1 & 1
\end{pmatrix}$$</span></p>
|
Andrew D. Hwang
| 86,418 |
<p>tl; dr: It's arguably impossible to answer a philosophical question about definitions, but if this question came up during a chat over beverages, I'd say</p>
<ol>
<li>We know how to do calculus on (non-empty open subsets of) Cartesian spaces.</li>
<li>The definition of a smooth manifold uses this knowledge to extend the concept of smoothness.</li>
<li>We know how to use calculus to measure arc length, area, and so forth, in a manifold embedded in a Euclidean space.</li>
<li>A certain amount of ambient Euclidean structure can be restricted to an embedded manifold, and then viewed as data intrinsic to the manifold, separately from a particular embedding.</li>
</ol>
<hr />
<p>Regarding point 2., we use coordinate charts (or parametrizations, their mapping inverses) to transfer questions about functions and mappings on manifolds to questions about functions and mappings on Cartesian space. In order for the concepts such as <em>smoothness</em> to be well-defined (independent of chart) we impose a compatibility condition between parametrizations <span class="math-container">$\mathbf{x}_{1}$</span> and <span class="math-container">$\mathbf{x}_{2}$</span> amounting to smoothness of <span class="math-container">$\mathbf{x}_{2}^{-1} \circ \mathbf{x}_{1}$</span>.</p>
<p>For point 4., thinking specifically of surfaces in Euclidean three-space, a parametrization induces three functions <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$G$</span> defined in a coordinate neighborhood (open subset of the plane). These suffice to specify the lengths of tangent vectors, the angle between two non-zero tangent vectors at a point, the arc length of a piecewise-smooth path, and the area of a region of surface. These functions and their derivatives also suffice to define <em>geodesics</em>, paths in the surface of locally shortest length. Remarkably (Gauss's <em>Theorema Egregium</em>) these functions and their derivatives also detect whether or not a surface is locally isometric to the Euclidean plane in the sense that small geodesic triangles have total interior angle <span class="math-container">$\pi$</span>. Our ability to define and measure these quantities using functions of two variables defined in a coordinate neighborhood is what do Carmo means by "without referring back to the ambient space." Analogously, general relativity allows us to conceptualize and work with the geometry of spacetime without imagining our universe embedded in a higher-dimensional space.</p>
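<p>Concretely, in do Carmo's notation, if <span class="math-container">$\mathbf{x}(u,v)$</span> is the parametrization, then <span class="math-container">$E=\langle\mathbf{x}_{u},\mathbf{x}_{u}\rangle$</span>, <span class="math-container">$F=\langle\mathbf{x}_{u},\mathbf{x}_{v}\rangle$</span>, <span class="math-container">$G=\langle\mathbf{x}_{v},\mathbf{x}_{v}\rangle$</span>, and, for instance, the length of a path <span class="math-container">$t\mapsto\mathbf{x}(u(t),v(t))$</span> is computed entirely from these coordinate functions as
<span class="math-container">$$\int_{a}^{b}\sqrt{E\,u'(t)^{2}+2F\,u'(t)v'(t)+G\,v'(t)^{2}}\,dt,$$</span>
with no reference to the ambient space.</p>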
<p>By contrast, a parametrization, particularly an embedding of a surface in Euclidean three-space, also induces components <span class="math-container">$e$</span>, <span class="math-container">$f$</span>, <span class="math-container">$g$</span> (or <span class="math-container">$\ell$</span>, <span class="math-container">$m$</span>, <span class="math-container">$n$</span> depending on the author) of a <em>second fundamental form</em>. Loosely, these functions measure how the surface bends in the ambient space, or more precisely, how a continuous unit normal field varies in coordinates. Because a unit normal field "does not lie in the surface," geometers think of the second fundamental form as "extrinsic" geometry that <em>does</em> refer to the ambient space.</p>
|
1,877,567 |
<p>I need help calculating two integrals</p>
<p>1)
$$\int_1^2 \sqrt{4+ \frac{1}{x}}\mathrm{d}x$$
2)
$$\int_0^{\frac{\pi}{2}}x^n \sin(x)\,\mathrm{d}x$$</p>
<p>For the first one I think I will have to use a substitution, but I don't see how to reduce it to one of the known basic integrals.
The second one I think calls for integration by parts (per partes), but the $n$ is confusing.</p>
<p>Any help would be appreciated.
Thank you in advance.
(If there happens to be any duplicates I am apologizing also, but didn't find any.)</p>
|
Community
| -1 |
<p>You can indeed work out the second integral by parts.</p>
<p>$$I_n:=\int_0^{\pi/2}x^n\sin x\,dx=-\left.x^n\cos x\right|_0^{\pi/2}+n\int_0^{\pi/2}x^{n-1}\cos x\,dx.$$</p>
<p>Repeat with the cosine integral,</p>
<p>$$J_n:=\int_0^{\pi/2}x^n\cos x\,dx=\left.x^n\sin x\right|_0^{\pi/2}-n\int_0^{\pi/2}x^{n-1}\sin x\,dx.$$</p>
<p>This gives you the recurrence relations</p>
<p>$$I_n=nJ_{n-1},\\J_n=\left(\frac\pi2\right)^n-nI_{n-1},$$</p>
<p>so that</p>
<p>$$I_n=n\left(\left(\frac\pi2\right)^{n-1}-(n-1)I_{n-2}\right).$$</p>
<p>When you decrease $n$, you will eventually reach $n=1$ or $n=0$, so you need to explicitly compute $I_1$ and $I_0$. It is an easy matter to establish</p>
<p>$$I_0=J_0=1,$$ then $$I_1=1.$$</p>
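<p>(If you want to convince yourself of the recurrence, here is a quick Mathematica sketch, where <code>i[n]</code> stands for $I_n$ above:)</p>

<pre><code>i[0] = 1; i[1] = 1;
i[n_] := i[n] = n ((Pi/2)^(n - 1) - (n - 1) i[n - 2]);   (* memoized recurrence *)
Table[Simplify[i[n] - Integrate[x^n Sin[x], {x, 0, Pi/2}]], {n, 2, 6}]
(* {0, 0, 0, 0, 0} *)
</code></pre>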
<hr>
<p>To solve the recurrence, you can rewrite</p>
<p>$$\frac{I_n}{n!}=\frac1{(n-1)!}\left(\frac\pi2\right)^{n-1}-\frac{I_{n-2}}{(n-2)!}.$$</p>
<p>Then</p>
<p>$$\frac{I_{n-2}}{(n-2)!}=\frac1{(n-3)!}\left(\frac\pi2\right)^{n-3}-\frac{I_{n-4}}{(n-4)!}$$ and summing,</p>
<p>$$\frac{I_n}{n!}=\frac1{(n-1)!}\left(\frac\pi2\right)^{n-1}-\frac1{(n-3)!}\left(\frac\pi2\right)^{n-3}+\frac{I_{n-4}}{(n-4)!}$$ and finally</p>
<p>$$\frac{I_n}{n!}=\frac{I_{m}}{m!}+\sum_{k=m+4}^{n}\left(\frac1{(k-1)!}\left(\frac\pi2\right)^{k-1}-\frac1{(k-3)!}\left(\frac\pi2\right)^{k-3}\right)$$</p>
<p>where $m=n\bmod4$ and the summation is performed with a step of $4$, i.e. over $k=m+4,m+8,\dots,n$. You will recognize the truncated expansion of $\sin\pi/2$ or $\cos\pi/2$, depending on the parity of $m$.</p>
|
1,877,567 |
<p>I need help calculating two integrals</p>
<p>1)
$$\int_1^2 \sqrt{4+ \frac{1}{x}}\mathrm{d}x$$
2)
$$\int_0^{\frac{\pi}{2}}x^n \sin(x)\,\mathrm{d}x$$</p>
<p>For the first one I think I will have to use a substitution, but I don't see how to reduce it to one of the known basic integrals.
The second one I think calls for integration by parts (per partes), but the $n$ is confusing.</p>
<p>Any help would be appreciated.
Thank you in advance.
(If there happens to be any duplicates I am apologizing also, but didn't find any.)</p>
|
Brevan Ellefsen
| 269,764 |
<p>Just as an interesting fact, the general antiderivative for the second integral can be written as
$$\int x^n \sin(x) \, \mathrm{d}x = \frac 12 i x^{n+1} (\operatorname{E_{-n}} (-i x)-\operatorname{E_{-n}} (i x))+C$$<br>
This can be proven directly through substitutions. Write the answer in terms of integrals, combine, and substitute to get back to the original integral</p>
|
2,388,738 |
<blockquote>
<p>I'm messing around with a visualization that has nothing to do with the primes, and in order to execute it correctly I need an ordered list of all points in the order that the Ulam spiral crosses them. I've tried some things myself but have only run into abundantly complicated paths to a solution. Also, is there a name for the study of patterns that occur in spirals, whether they are related to primes or not?</p>
</blockquote>
<p>E.g.: Starting with this image:</p>
<p><a href="https://i.stack.imgur.com/90A3p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/90A3p.png" alt="enter image description here"></a></p>
<p>if $1$ is at the origin, then the list would be as follows: $$(0,0),(1,0),(1,1),(0,1),(-1,1),(-1,0),(-1,-1),(0,-1),(1,-1),(2,-1),(2,0),...$$</p>
|
iadvd
| 189,215 |
<p>You are lucky because it seems that a very similar pair of sequences is already at OEIS. </p>
<p>The $x$-coordinates are sequence <a href="https://oeis.org/A174344" rel="nofollow noreferrer">A174344</a> ("List of $x$-coordinates of point moving in clockwise spiral"), and the $y$-coordinates are sequence <a href="https://oeis.org/A268038" rel="nofollow noreferrer">A268038</a> ("List of $y$-coordinates of point moving in clockwise spiral").</p>
<p>List of $x$-coordinates of point moving in clockwise spiral. </p>
<blockquote>
<p>$$0, 1, 1, 0, -1, -1, -1, 0, 1, 2, 2, 2, 2, 1, 0, -1, -2, -2, -2, -2, -2, -1, 0, 1, 2, 3, 3, 3, 3, 3, 3, 2, 1, 0, -1, -2, -3, -3, -3, -3, -3, -3, -3, -2, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 3, 2, 1, 0, -1, -2, -3, -4, -4, -4, -4, -4, -4, -4, -4, -4, -3, -2, \cdots$$</p>
</blockquote>
<p>List of $y$-coordinates of point moving in clockwise spiral. </p>
<blockquote>
<p>$$0, 0, -1, -1, -1, 0, 1, 1, 1, 1, 0, -1, -2, -2, -2, -2, -2, -1, 0, 1, 2, 2, 2, 2, 2, 2, 1, 0, -1, -2, -3, -3, -3, -3, -3, -3, -3, -2, -1, 0, 1, 2, 3, 3, 3, 3, 3, 3, 3, 3, 2, 1, 0, -1, -2, -3, -4, -4, -4, -4, -4, -4, -4, -4, -4, -3, -2, -1, 0, 1, 2, 3, 4, 4, 4, \cdots$$</p>
</blockquote>
<p>In the linked OEIS pages there are formulas to calculate each sequence of coordinates.</p>
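<p>If all you need is the ordered list of lattice points itself, a short Mathematica sketch along the following lines reproduces the visit order listed in the question (directions right, up, left, down, with step lengths $1,1,2,2,3,3,\dots$). Note that the OEIS sequences above trace the clockwise mirror image of this path, so their $y$-coordinates differ from these by a sign.</p>

<pre><code>ulamSpiralPoints[nMax_] :=
 Module[{pts = {{0, 0}}, p = {0, 0},
   dirs = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}}, d = 1, len = 1},
  While[Length[pts] < nMax,
   Do[Do[p += dirs[[d]]; AppendTo[pts, p], {len}];  (* walk len steps in direction d *)
    d = Mod[d, 4] + 1, {2}];                         (* turn; two legs per length *)
   len++];
  Take[pts, nMax]]

ulamSpiralPoints[11]
(* {{0,0},{1,0},{1,1},{0,1},{-1,1},{-1,0},{-1,-1},{0,-1},{1,-1},{2,-1},{2,0}} *)
</code></pre>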
|
5,612 |
<p>This is driving me nuts: I'm trying to control the parameters for a relatively large system of ODEs using Manipulate.</p>
<pre><code>With[{todo =
Module[
{sol, ode, timedur = 40},
ode = Evaluate[odes /. removeboundaries /. moieties];
sol = NDSolve[Join[ode, init], vars, {t, 0, timedur}];
Plot[Evaluate[c[1][t] /. parms] /. sol, {t, 0, timedur}]
],
controls =
Sequence @@
Table[{{parms[[i]][[1]], parms[[i]][[2]]},
0, (3*parms[[i]][[2]])}, {i, 1, Length[parms]}]},
Manipulate[todo, controls, ContinuousAction -> False,
ControlPlacement -> Bottom]]
</code></pre>
<p>The solve step, in which sol is created, performs successfully. When trying to make a Plot from this however, I get all kinds of errors like </p>
<pre><code>NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 0.`
</code></pre>
<p>Although NDSolve runs fine when Plot is commented out! Note that this code relies on global variables defined elsewhere in the script. I should also add that the code inside the Module[] works when copied to a fresh cell. </p>
<p>Could someone help me out? </p>
<p>Thanks!!</p>
|
Verbeia
| 8 |
<p>Even without a minimual example, it is clear that you have a problem related to your use of <code>With</code> as the outer scoping construct. Please see the <a href="https://mathematica.stackexchange.com/q/559/8">answers to this question</a>, particularly <a href="https://mathematica.stackexchange.com/a/562/8">mine</a>.</p>
<blockquote>
<ul>
<li>Use <code>With</code> for local constants that you don't have to change subsequently.</li>
<li>Use <code>Module</code> for local variables that are local to that piece of code.</li>
<li>Use <code>Block</code> for local variables that are local to that sequence of evaluation.</li>
</ul>
</blockquote>
<p>You are defining <code>todo</code> to be a <em>constant</em>, and then trying to <code>Manipulate</code> it. </p>
<p>In addition, the <code>Module</code> returns the graphic from the <code>Plot</code> as its output. It will be a <code>Graphics[]</code> object, which is not amenable to the kinds of parameter adjustment you seem to want. </p>
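<p>A minimal toy illustration of the pitfall (nothing here is specific to your ODE system): with <code>With</code>, the body is evaluated once and <code>Manipulate</code> only ever sees a fixed <code>Graphics</code> object, whereas a body written inside <code>Manipulate</code> is re-evaluated as the controls move.</p>

<pre><code>With[{todo = Plot[Sin[t], {t, 0, 2 Pi}]},
 Manipulate[todo, {a, 1, 5}]]                        (* slider has no effect *)

Manipulate[Plot[Sin[a t], {t, 0, 2 Pi}], {a, 1, 5}]  (* re-evaluated as a changes *)
</code></pre>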
<p>My suggestion would be something along the lines of:</p>
<pre><code>Module[{sol, ode, timedur = 40, controls},
ode = Evaluate[odes /. removeboundaries /. moieties];
sol = NDSolve[Join[ode, init], vars, {t, 0, timedur}];
controls = Sequence @@
Table[{{parms[[i]][[1]], parms[[i]][[2]]},
0, (3*parms[[i]][[2]])}, {i, 1, Length[parms], 1}];
Manipulate[Plot[Evaluate[c[1][t] /. parms] /. sol, {t, 0, timedur}] ,
controls, ContinuousAction -> False, ControlPlacement -> Bottom]]
</code></pre>
|
373,068 |
<p>For a real number $a$ and a positive integer $k$, denote by $(a)^{(k)}$ the number $a(a+1)\cdots (a+k-1)$ and $(a)_k$ the number
$a(a-1)\cdots (a-k+1)$. Let $m$ be a positive integer $\ge k$. Can anyone show me, or point me to a reference, why the number
$$ \frac{(m)^{(k)}(m)_k}{(1/2)^{(k)} k!}= \frac{2^{2k}(m)^{(k)}(m)_k}{(2k)!}$$ is always an integer?</p>
|
TCL
| 3,249 |
<p>\begin{eqnarray*}& &\frac{2^{2k}(m)^{(k)}(m)_k}{(2k)!}\\
&=&\frac{2^{2k}(m-k+1)(m-k+2)\cdots (m-1)(m)(m)(m+1)\cdots (m+k-2)(m+k-1)}{(2k)!}
\end{eqnarray*}
Now we write one of the $m$ as $\frac{1}{2}[(m-k)+(m+k)]$ and distribute, and the last expression becomes
$$2^{2k-1}\left[\frac{(m+k-1)(m+k-2)\cdots(m-k) }{(2k)!}+\frac{(m+k)(m+k-1)\cdots (m-k+1)}{(2k)!}\right]$$
which is equal to
$$2^{2k-1}\left[{m+k-1\choose 2k}+{m+k\choose 2k}\right],$$ an integer.</p>
|
1,824,280 |
<p>The question is from one of the past exams in a course I am doing. I have gotten halfway through it but cannot figure out how to finish it off.</p>
<p>So the first part was to prove that $4 \mid n^2 - 5 $ if $n$ is an odd integer. </p>
<p>Here is a brief proof (without intricate details):<br>
Consider $n = 2k+1$<br>
$n^2 - 5 = 4(k^2 + k -1)$<br>
$\therefore 4 \mid n^2 - 5$</p>
<p>Now I think I can prove the second part by showing $k^2 + k - 1 \neq 2 $ if $k$ is an integer.<br>
Hence rearranging I have $k^2 + k - 3 = 0$, which gives two non-integer roots. </p>
<p>However this course has not used the quadratic formula explicitly so I am wondering if there is a simpler way.</p>
|
Tacet
| 186,012 |
<p>$$n^2 - 5 \equiv_8 0 \Leftrightarrow n^2 \equiv_8 5$$</p>
<p>So $n$ would be a number whose square has remainder $5$ when divided by $8$.
Could you show that no such number exists?</p>
<p><strong>Hint</strong>: If you have no better idea, you can just check the numbers from the set $\lbrace 0, 1, \dots, m-1\rbrace$, where $m$ is the modulus (here $8$).</p>
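<p>(Spelling this out: squaring each of the residues $0,1,\dots,7$ gives $0,1,4,1,0,1,4,1 \pmod 8$, so a square is never $\equiv 5 \pmod 8$.)</p>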
<p><strong>Def</strong>:</p>
<p>$$ a \equiv_m b \Longleftrightarrow m \mid (a - b)$$</p>
|
1,824,280 |
<p>The question is from one of the past exams in a course I am doing. I have gotten halfway through it but cannot figure out how to finish it off.</p>
<p>So the first part was to prove that $4 \mid n^2 - 5 $ if $n$ is an odd integer. </p>
<p>Here is a brief proof (without intricate details):<br>
Consider $n = 2k+1$<br>
$n^2 - 5 = 4(k^2 + k -1)$<br>
$\therefore 4 \mid n^2 - 5$</p>
<p>Now I think I can prove the second part by showing $k^2 + k - 1 \neq 2 $ if $k$ is an integer.<br>
Hence rearranging I have $k^2 + k - 3 = 0$, which gives two non-integer roots. </p>
<p>However this course has not used the quadratic formula explicitly so I am wondering if there is a simpler way.</p>
|
Behrouz Maleki
| 343,616 |
<p>If $n=2k$ then $n^2-5=8q-5$ or $n^2-5=8q-1$</p>
<p>If $n=2k+1 \,$ then $n^2-5=8q-4$ </p>
|
1,824,280 |
<p>The question is from one of the past exams in a course I am doing. I have gotten halfway through it but cannot figure out how to finish it off.</p>
<p>So the first part was to prove that $4 \mid n^2 - 5 $ if $n$ is an odd integer. </p>
<p>Here is a brief proof (without intricate details):<br>
Consider $n = 2k+1$<br>
$n^2 - 5 = 4(k^2 + k -1)$<br>
$\therefore 4 \mid n^2 - 5$</p>
<p>Now I think I can prove the second part by showing $k^2 + k - 1 \neq 2 $ if $k$ is an integer.<br>
Hence rearranging I have $k^2 + k - 3 = 0$, which gives two non-integer roots. </p>
<p>However this course has not used the quadratic formula explicitly so I am wondering if there is a simpler way.</p>
|
Community
| -1 |
<p>If $n=2k+1$ then you have shown that $n^2-5=4(k^2+k-1)$. If $k$ is even then $k^2+k-1$ is an even plus an even minus an odd, therefore odd, and if $k$ is odd then $k^2+k-1$ is an odd plus an odd minus an odd, therefore odd. So $2\nmid k^2+k-1$ therefore $8\nmid n^2-5=4(k^2+k-1)$.</p>
<p>If $n$ is even then obviously $n^2-5$ is odd.</p>
|
187,395 |
<p>I can't find my dumb mistake.</p>
<p>I'm figuring the definite integral from first principles of $2x+3$ with limits $x=1$ to $x=4$. No big deal! But for some reason I can't find where my arithmetic went screwy. (Maybe because it's 2:46am @_@).</p>
<p>so </p>
<p>$\delta x=\frac{3}{n}$ and $x_i^*=\frac{3i}{n}$</p>
<p>where $x_i^*$ is the right end point of each rectangle under the curve.</p>
<p>So the sum of the areas of the $n$ rectangles is</p>
<p>$\Sigma_{i=1}^n f(i\,\delta x)\,\delta x$</p>
<p>$=\Sigma_{i=1}^n f(\frac{3i}{n})\frac{3}{n}$</p>
<p>$=\Sigma_{i=1}^n (2(\frac{3i}{n})+3)\frac{3}{n}$</p>
<p>$=\frac{3}{n}\Sigma_{i=1}^n (2(\frac{3i}{n})+3)$</p>
<p>$=\frac{3}{n}\Sigma_{i=1}^n ((\frac{6i}{n})+3)$</p>
<p>$=\frac{3}{n} (\frac{6}{n}\Sigma_{i=1}^ni+ 3\Sigma_{i=1}^n1)$</p>
<p>$=\frac{3}{n} (\frac{6}{n}\frac{n(n+1)}{2}+ 3n)$</p>
<p>$=\frac{18}{n}\frac{(n+1)}{2}+ 9$</p>
<p>$=\frac{9(n+1)}{n}+ 9$</p>
<p>$\lim_{n\to\infty} \frac{9(n+1)}{n}+ 9 = 18$</p>
<p>But the correct answer is 24. </p>
|
André Nicolas
| 6,312 |
<p>You want
$$f\left(1+\frac{3i}{n}\right).$$</p>
<p>The $+3$ in the third line (and later) will change to $+5$.</p>
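<p>Carrying the corrected computation through gives the expected value:
$$\sum_{i=1}^n f\!\left(1+\frac{3i}{n}\right)\frac{3}{n}=\frac{3}{n}\sum_{i=1}^n\left(\frac{6i}{n}+5\right)=\frac{9(n+1)}{n}+15\ \longrightarrow\ 9+15=24.$$</p>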
|
139,934 |
<p>Suppose I want to solve an equation for the matrix elements of $\bar{W}$:
$$\alpha W_{ba}+\beta W_{bb}=x; \alpha W_{aa}+\beta W_{ab}=y$$</p>
<p>Using the syntax <code>Subscript[W, ij]</code> for my matrix element (on the $i$th row and $j$th column), I get the following message:</p>
<p>Set::write: Tag Times in 2 x is Protected.</p>
<p>Is it possible at all to write such a double subscript in Mathematica?</p>
|
Nasser
| 70 |
<p>Using user9444 method to read text files with header. </p>
<pre><code>SetDirectory[NotebookDirectory[]]
data=Cases[Import["t.txt","Table"],{_?NumberQ,___}];
data
</code></pre>
<p><img src="https://i.stack.imgur.com/6zBEn.png" alt="Mathematica graphics"></p>
<pre><code>MatrixForm[data]
</code></pre>
<p><img src="https://i.stack.imgur.com/2jZ2H.png" alt="Mathematica graphics"></p>
<p>See <a href="https://mathematica.stackexchange.com/questions/50718/skip-header-lines-on-import">skip-header-lines-on-import</a> for other examples of how this can be done. </p>
<p>Matlab is a little easier for this sort of thing. It has the <a href="https://www.mathworks.com/help/matlab/ref/textscan.html" rel="nofollow noreferrer">textscan()</a> function, which one can tell how many lines to skip (for headers). I do not know why Mathematica does not have such functionality. The above method can fail if a header line happens to start with a number as well, but for your file it works.</p>
|
2,972,085 |
<p><a href="https://i.stack.imgur.com/pcOfx.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/pcOfx.jpg" alt="enter image description here"></a></p>
<p>My friend showed me the diagram above and asked me </p>
<p>"What is the area of a BLACK circle with radius of 1 of BLUE circle?"</p>
<p>So, I solved it by an algebraic method.
<span class="math-container">$$$$</span></p>
<p>Let center of <span class="math-container">$\color{black}{BLACK}$</span> circle be <span class="math-container">$(0,0)$</span>.</p>
<p>We can set, </p>
<p><span class="math-container">$x^2 + (y-R)^2 = R^2$</span> , where <span class="math-container">$R$</span> means radius of <span class="math-container">$\color{red}{RED}$</span> circle.</p>
<p><span class="math-container">$(x-p)^2 + (y-r)^2 = r^2 $</span>, where <span class="math-container">$(p,r)$</span> means center of <span class="math-container">$\color{blue}{BLUE}$</span> circle.
<span class="math-container">$$$$</span>
These can imply</p>
<p><span class="math-container">$ 2R=r+ \sqrt{p^2 + r^2}$</span></p>
<p><span class="math-container">$p^2 + (R-r)^2 = (R+r)^2 $</span></p>
<p>So, </p>
<p><span class="math-container">$ 2r=R$</span></p>
<p><span class="math-container">$$$$</span></p>
<p>But he wants not an algebraic but a <strong>geometrical method</strong>.</p>
<p>How can I show <span class="math-container">$ 2r=R$</span> with a <strong>geometrical method</strong>?</p>
<p>Really thank you.</p>
<p><span class="math-container">$$$$</span></p>
<p>(Actually I constructed the diagram with an algebraic method, </p>
<p>but I'd like to know how to construct it with a geometrical method.)</p>
|
g.kov
| 122,782 |
<p><a href="https://i.stack.imgur.com/6NWas.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6NWas.png" alt="enter image description here"></a></p>
<p><span class="math-container">\begin{align}
\triangle ADB:\quad
|DB|^2&=
(\tfrac{R}2+r)^2
-(\tfrac{R}2-r)^2
=2rR
,\\
\triangle BDO:\quad
|DB|^2&=
(R-r)^2-r^2
=R(R-2r)
.
\end{align}</span><br>
Hence,</p>
<p><span class="math-container">\begin{align}
R(R-2r)&=2rR
,\\
R&=4r
.
\end{align}</span> </p>
|
1,828,042 |
<p>This is my first question on this site, and this question may sound disturbing. My apologies, but I truly need some advice on this.</p>
<p>I am a sophomore math major at a fairly good math department (top 20 in the U.S.), and after taking some upper-level math courses (second courses in abstract algebra and real analysis, differential geometry, etc), I can say that I genuinely like math, and if I have even A BIT of a chance to succeed, I will go to graduate school and choose math research as my career.</p>
<p>However, this is exactly the thing that I am afraid of. My grades on the courses are mediocre (my GPA for math courses is around 3.7), and for the courses I got A's, I had to work very hard, much harder than others to get the same result, and I often get confused in many of the classes, while the others understand the material quickly and could answer professor's questions, and at the same time I didn't even understand what the professor was really asking. I really wonder, if I have to work hard even on undergraduate courses, does that mean I am not naturally smart enough for more advanced math, especially compared to everyone else in my class? Can I even survive graduate level math if I even sometimes struggle with undergraduate courses? I always believe that adequate mathematicians could do well in their undergraduate courses easily. In my case, even if I work very hard, I forget definitions/theorems easily and then of course forget how to use them to solve problems.</p>
<p>Is it still worth trying if I am significantly behind the regular level and have to work hard even for undergraduate courses, given that there are a lot of smart people who can understand them instantly? This feeling hurts me a lot: especially when I am struggling with something in math, I always feel I am useless trash and ask myself why I am so stupid.</p>
<p>I thought about talking to my professors about this issue, but I find this too embarrassing to start. I am really afraid that if I ask them this question, they may tell me the truth in person that "you are really not smart enough to go to graduate school".</p>
<p>So how can I tell if it is still worth it for me to think about this path, or whether I should realize that I have no chance to succeed and give up now? I appreciate encouraging comments, but please, please be honest in this case because it is really important for my future plans. Thanks again for your advice, and I am really grateful.</p>
|
Landon Carter
| 136,523 |
<p>Often what happens in mathematics, from my personal experience, is that the one with more exposure generally gets the upper hand. What you are talking about here is talent, which you feel you lack. Talent plays a role to a certain extent, but I have seen talented people, too, struggling with high-level mathematics (algebraic geometry, to be precise).</p>
<p>So here is what you should do. You have realized that your performance in class is a bit mediocre as compared to others, which is a good thing. Now, it is time to better your prospects as a math student.</p>
<ol>
<li><strong>Read in advance before the class</strong>. You do not know how important this is. Try to get a hang of what is going to be taught. Get a textbook, read the things before you come to class. Understand what is going on. Go through the examples. Solve exercises. This is probably the most fundamental important thing students miss.</li>
<li>After doing 1., when you come to class, you will get a better understanding of what your professor is going to teach. You too will be able to answer his questions, mark my words. If you cannot think of an answer immediately, write the problem down, think over it and at the end of the day, if you are unsuccessful, go to your professor. Tell him what you tried, and why they failed. Don't just go with a problem and say "I couldn't do anything with this."</li>
<li>Thing is, the very good mathematicians I have seen, are extremely well-read. Read a lot. Spend time with maths. Understand how topology is related to geometry. These interconnections between math topics is very crucial.</li>
<li><strong>Discuss</strong>. Your discussions may happen with your own professor, or with your fellow students. Do you know, I never really understood what is the use of Measure Theory. Why the hell would someone make easy things so dry? And then I started talking to my professor. I told him to teach me. I believe the long evening hours the two of us discussed in his office have been crucial to my understanding of mathematics.</li>
<li>I would advise you to do the above things for a sufficiently long time, before you decide to switch to a career other than math. Believe me, math is fun. You love it, and you are already in the minority. All you have to do now is sharpen your skills.</li>
<li>Research thinking is something you should develop. You can read some accessible papers on advice of your professor. The method to read an easy paper is to try to prove the lemmas yourself, after probably skimming through the main ideas. Don't read word by word. Can you give an alternate proof to some lemma? Can that topic be generalized? Talk to your professor, again!</li>
<li><strong>Think, read and study!</strong> Do this for the next few months and then comment on your progress again.</li>
</ol>
|
24,195 |
<p>I am looking at a von Neumann algebra constructed from a discrete group and a 2-cocycle.
Does someone know some good references (article, book)? It would be very helpful for me.
To be more precise, consider a countable group $G$ and a 2-cocycle $\phi :G^2\rightarrow S^1$, where $S^1$ is the group of complex numbers of modulus 1.
You get a representation $\pi$ of the group $G$ on the Hilbert space $l^2(G)$ defined as follows: $$\pi(g)(e_t)=\phi(g,t)\,e_{gt},$$ where $(e_t)$ is the canonical Hilbert basis of $l^2(G)$.
I consider $L_\phi(G)$, the von Neumann algebra generated by $\pi(G)$.
I am looking for references on these kinds of algebras.
Thanks,
Arnaud</p>
|
Wadim Zudilin
| 4,953 |
<p>After passions calmed down, I can put back my old unsuccessful attempt. At least I was quite enthusiastic at that time about the problem, until I have got downvotes and seen some vague ideas of others (they are still here) as answers.</p>
<p><em><strong>Post as it was on May 12, 2010.</strong></em>
Consider 3 circles of radii $r_1=r$, $r_2=2r$ and $r_3=3r$, where
$r=1/(2\sqrt{7\pi})$, so that their total area $s$ is
$$
s=\pi(r_1^2+r_2^2+r_3^2)=\frac12.
$$</p>
<p><a href="http://en.wikipedia.org/wiki/Descartes%27_theorem" rel="nofollow">Descartes' theorem</a>
asserts that if three circles of radii $r_1$, $r_2$ and $r_3$
are pairwise externally tangent to each other and circumscribed by
the circle of radius $R$, then
$$
\frac1R=2\sqrt{\frac1{r_1r_2}+\frac1{r_2r_3}+\frac1{r_3r_1}}
-\biggl(\frac1{r_1}+\frac1{r_2}+\frac1{r_3}\biggr).
$$
In our case $1/r_1+1/r_2+1/r_3=11/(6r)$ while the square root equals $1/r$, so the formula gives $1/R=2/r-11/(6r)=1/(6r)$, that is $R=6r$ and
$$
R=\frac3{\sqrt{7\pi}},
$$
so that the area $S$ of the circumscribing circle is
$$
S=\pi R^2=\frac97>1.
$$
This implies that the three given circles of total area $1/2$
cannot be put inside a circle of total area 1 without intersections.</p>
<p><strong>Edit.</strong> I have to agree that my geometric intuition is
too weak to notify an obvious non-applicability of Decartes' theorem.
As Tony mentions in his comment, one can fit these three circles in
a circle of radius $r_2+r_3$, and the circle of radius $r_1$ fits
in one of the gaps, without touching the enclosing circle.</p>
<p>Without trying to correct the above solution I indicate another
choice: $r_1=0.99q$, $r_2=r_3=2q$, where $q$ is chosen in such a
way that the total area is again $1/2$. Decartes' theorem produces
the circumscribing circle of area $>1.008$. There is still an option
to put two large circles along a diameter of circle of total area 1
and see whether there is a room for the smaller one. This definitely
means that more geometry is involved...</p>
<p><strong>Final edit</strong> (hopefully).
As Roland mentions, the three circles again fit a circle
of radius $r_2+r_3$. After the two large equal circles are inscribed,
there is room for another circle of radius $2r_2/3$ (well, one
can still have the space for the other of radius $2r_3/3$, but
I do not care of more than 3 circles inscribed). The corresponding
geometric picture involves the right triangles with sides $1$, $4/3$,
and $5/3$, a nice appearance of the Pythagorean triple $3^2+4^2=5^2$.</p>
<p>Is it true that this ($r_1=2/3$, $r_2=r_3=1$, total area $s=22\pi/9$)
is the worst case of inscribing 3 circles into the circle of
radius $R=2$ (area $S=4\pi$)? The quality of this inscription is
$S/s=18/11$. In other words, <em>can we replace areas $1/2$ and $1$
in the original problem by $11/18$ and $1$ respectively, if at least
three circles are inscribed</em>?</p>
<p>I have to apologize for my unsuccessful attempt. You still have
a chance to enjoy the beauty and difficulty of the original problem.
I thank Tony and Roland for pointing out my mistakes in geometry.</p>
|
24,195 |
<p>I am looking at a von Neumann algebra constructed from a discrete group and a 2-cocycle.
Does someone know some good references (article, book)? It would be very helpful for me.
To be more precise, consider a countable group $G$ and a 2-cocycle $\phi :G^2\rightarrow S^1$, where $S^1$ is the group of complex numbers of modulus 1.
You get a representation $\pi$ of the group $G$ on the Hilbert space $l^2(G)$ defined as follows: $$\pi(g)(e_t)=\phi(g,t)\,e_{gt},$$ where $(e_t)$ is the canonical Hilbert basis of $l^2(G)$.
I consider $L_\phi(G)$, the von Neumann algebra generated by $\pi(G)$.
I am looking for references on these kinds of algebras.
Thanks,
Arnaud</p>
|
Victor Protsak
| 5,740 |
<p><em> I have rewritten the post so that the proof is correct. </em></p>
<p>This problem is a bit hard, but following the Polya dictum, here is the answer to an apparently easier one: yes, if circles are replaced with parallel squares, and moreover, a suitable version of the greedy algorithm works. </p>
<p><b>Theorem.</b> <em>Let R be an $a$ by $b$ rectangle with $b\leq a\leq 2b.$ Any set of squares of total area at most $ab/2$ can be packed into R.</em> (All rectangles and squares are assumed to be oriented parallel to the coordinate axes throughout.)</p>
<p><b>Corollary 1.</b> <em>A finite set of squares of total area 1 can be packed into a square of area 2.</em> </p>
<p><b>Corollary 2.</b> <em>A finite set of circles of total area 1 can be packed into a circle of area 4.</em></p>
<p><em>Proof of Corollary 2.</em> Replace each circle of diameter $d$ with a square of size $d$. Pack the squares into a square of double their total area, then circumscribe a circle C around it. The inscribed circles of the small squares have given radii, disjoint interiors, and the area of C is four times the total area of the given circles. $\square$ </p>
<p><em>Proof of Theorem.</em> The proof is by induction on the number of squares and uses the Packing Lemma below. Suppose that the largest square has size $c.$ Since $c^2\leq ab/2\leq b^2,$ we get $c\leq b.$ Split the rectangle R into two rectangles R' and R'' of the same height $b$ and of widths $c$ and $a-c$. </p>
<p>Case 1. If $c\leq a-b/2$ then the dimensions of the right rectangle R'' satisfy the conditions of the theorem, because $b/2\leq a-c\leq 2b.$ By the Packing Lemma, at least half the area of the left rectangle R' can be packed, starting with the $c$ square. The remaining squares are fewer in number and constitute at most half the area of R''. By the inductive assumption, they can be packed into R''.</p>
<p>Case 2. If $c\geq a-b/2$ then $b\geq 2(a-c)$ and R'' contains a subrectangle R''' of height $2(a-c)$ and width $a-c$, to which the theorem applies. Pack the $c$ square into R'. Since $c^2+(a-c)^2\geq a^2/2\geq ab/2,$ the remaining squares have total area at most $(a-c)^2$ and can be packed into R''' by the inductive assumption. $\square$</p>
<p><b>Packing Lemma.</b> <em>Let R be a $c$ by $b$ rectangle, $c\leq b$, and F be a finite set of squares with the total area at least half the area of R and the largest square of size $c$. Then a subset of F containing the $c$ square can be packed into R so that it covers at least half the area of R. </em></p>
<p><em>Proof. </em> Induction in the number of squares in F. Cut the rectangle R into a sequence of horizontal strips of width $c$, starting with the $c$ square at the top. The height of each subsequent strip is the size of the largest unused square, which is placed at the left end. By the inductive assumption, at least half of the strip, area-wise, can be packed with squares from F. Continue the process until the height of the remaining part of R ("the bottom strip") becomes less than $c$, and hence its area less than $c^2.$ At this point at least half of the area of the intermediate strips has been covered, as well as the larger of the areas of the top and the bottom strip, so that at least half of R has been covered. $\square$ </p>
<p><b>Comment:</b> In the initial post, I said that the Packing Lemma easily implied the result for packing the square: order the squares by size and successively apply the lemma to pack vertical columns whose widths are determined by the largest square not yet packed, starting with the largest. While that argument had virtues of simplicity and presentability, it had an unfortunate drawback of being invalid. The columns are getting skinnier, so it may well happen that passing to the next column, the largest remaining square is too wide to fit, even though its area is a tiny proportion of the remaining area. Specific example: if one attempts to pack the squares of sizes 1/2, 1/3, 1/6+$\epsilon$, 1/6+$\epsilon$, 1/6 into the square of size $2\sqrt{2}/3+\epsilon<1$ following the naive algorithm then the first column has width 1/2 and contains the 1/2 square, the second column has width 1/3 and contains the next 3 squares (covering at least half the area in both cases), which leaves a narrow vertical strip of width less than 1/6 that cannot accommodate the remaining 1/6 square. By controlling the distortion (the ratio of width and height), we get a more natural result.</p>
|
2,165,296 |
<p>Can every separable Banach space be isometrically embedded in $l^2$ ? Or at least in $l^p$ for some $1\le p<\infty$ ? </p>
<p>I only know that any separable Banach space is isometrically isomorphic to a linear subspace of $l^{\infty}$.</p>
<p>Please help . Thanks in advance </p>
|
Martín-Blas Pérez Pinilla
| 98,199 |
<p>Writing $f(t)$ as a power instead of as a quotient, it is easier to calculate the successive derivatives:
$$f(t) = (2t+1)^{-2}$$
$$f'(t) = (-2)(2t+1)^{-3}\,2$$
$$f''(t) = (-3)(-2)(2t+1)^{-4}\,2^2$$
$$f^{(3)}(t) = (-4)(-3)(-2)(2t+1)^{-5}\,2^3$$
$$\cdots$$
$$f^{(n)}(t) = \cdots$$
Can you continue?</p>
|
1,212,000 |
<p>I was trying to solve this square root problem, but I seem not to understand some basics. </p>
<p>Here is the problem.</p>
<p>$$\Bigg(\sqrt{\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2} - \sqrt[3]{\bigg(1 - \sqrt{2}\bigg)^3}\Bigg)^2$$</p>
<p>The solution is as follows:</p>
<p>$$\Bigg(\sqrt{\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2} - \sqrt[3]{\bigg(1 - \sqrt{2}\bigg)^3}\Bigg)^2 = \Bigg(\frac{3}{2} - \sqrt{2} - 1 + \sqrt{2}\Bigg)^2 = \bigg(\frac{1}{2}\bigg)^2 = \frac{1}{4}$$</p>
<p>Now, what I don't understand is how the left part of the problem becomes:
$$\frac{3}{2} - \sqrt{2}$$</p>
<p>Because I thought that $$\sqrt{\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2}$$
equals to $$\bigg(\bigg(\sqrt{2} - \frac{3}{2}\bigg)^2\bigg)^{\frac{1}{2}}$$
Which becomes $$\sqrt{2} - \frac{3}{2}$$</p>
<p>But as you can see I'm wrong. </p>
<p>I think that there is a step involving absolute value that I overlook/don't understand.
So could you please explain by which property or rule of square roots this problem is solved? </p>
<p>Thanks in advance</p>
|
marwalix
| 441 |
<p>A square root is always a non negative number so $\sqrt{x^2}=|x|$</p>
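<p>In particular, since $\sqrt2<\frac32$ we get $\sqrt{\left(\sqrt2-\frac32\right)^2}=\left|\sqrt2-\frac32\right|=\frac32-\sqrt2$, whereas the cube root, being an odd root, keeps the sign: $\sqrt[3]{\left(1-\sqrt2\right)^3}=1-\sqrt2$.</p>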
|
112,437 |
<p>I am working on a personal project involving a CloudDeploy[ ] that reads data off a Google Doc and then works with it. Ideally, the Google Doc is either a text document or a spreadsheet which contains a single string, which is what I want Mathematica to read as input.
<a href="https://docs.google.com/document/d/17m1JfjEbrna7e9INZv-FXZQ9yQJd8d1Uu2LFEPyT_ZI/edit?usp=sharing">Example here.</a></p>
<p>The kind of stuff I would like to do the the string is very simple, but I am stuck at the very beginning. This:</p>
<pre><code>string = Import[theURLstring];
</code></pre>
<p>obviously fails miserably.
Can someone help?</p>
<p><strong>More details</strong><br>
- I looked at <a href="https://mathematica.stackexchange.com/questions/23983/error-messages-in-importing-data-file/23988#23988">this past question</a>, but it didn't help me.<br>
- The reason I want to use a Google Doc rather than a databin, say, is that I want my friends to add to the string without having to know basically any technical details. I want it to be as easy as: "open the doc, add to the document, save, close" on the input side, and "open this webpage to see the results" on the output side.<br>
- a document on dropbox would be OK, but only if there is no way of doing it on Google.<br>
- I also don't have much technical internet knowledge, so bear with me if I am asking the impossible or if this question is very simple.</p>
|
梁國淦
| 19,360 |
<p>The format of the direct download link is
<code>https://docs.google.com/document/d/<<file id>>/export?format=doc</code> or <code>format=txt</code> or <code>format=pdf</code>, etc.
Just write a small function to replace the final part of the sharing URL with <code>export?format=xxx</code> and you get the <em>direct</em> (actually formatted) download link.
For example, define the function
<pre><code>googleDoc[gurl_, format_] :=
StringJoin@@Riffle[
ReplacePart[
StringSplit[gurl, "/" ],
-1 -> StringJoin["export?format=", format]
], "/"
]
</code></pre>
<p>Copy the share link of your document, and say</p>
<pre><code>URLDownload[ googleDoc[ "share link here", "doc" ] ]
</code></pre>
<p>or something like that.</p>
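<p>(One way to then read the text straight into a string, assuming the document is shared so that anyone with the link can view it; the share link below is just a placeholder:)</p>

<pre><code>str = Import[googleDoc["https://docs.google.com/document/d/<<file id>>/edit?usp=sharing", "txt"], "Text"]
</code></pre>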
|
1,893,168 |
<p>$$\lim_{x\to 0} {\ln(\cos x)\over \sin^2x} = ?$$</p>
<p>I can solve this by using L'Hopital's rule, but how would I do this without it?</p>
|
Patrick Stevens
| 259,262 |
<p>This way doesn't require fiddling with Taylor series or interchanging any sums and limits; it's an example of one of the many places where we can simplify things by recognising a derivative.</p>
<p>Substituting $u = \cos(x)$, we obtain $$\lim_{u \to 1} \frac{\log u}{1-u^2} = \lim_{u \to 1} \left[ \frac{\log u}{1-u} \times \frac{1}{1+u} \right]$$</p>
<p>Now, this is just $$\frac{1}{2} \times \lim_{u \to 1} \frac{\log(u)}{1-u} = -\frac{1}{2} \lim_{h \to 0} \frac{\log(1+h) - \log(1)}{h}$$
where we have substituted $h=-(1-u)$ and introduced the term $\log(1) = 0$ in the numerator.</p>
<p>That final limit is just $\frac{d}{dx} \log(x)$ evaluated at $x=1$; i.e. $1$.</p>
<hr>
<p>Strictly speaking, I suppose what we should do is find the limit when approached from above, and then the limit when approached from below, and show that they are the same. Otherwise the $u=\cos(x)$ substitution isn't obviously kosher. The calculations will be exactly the same.</p>
|
1,893,168 |
<p>$$\lim_{x\to 0} {\ln(\cos x)\over \sin^2x} = ?$$</p>
<p>I can solve this by using L'Hopital's rule, but how would I do this without it?</p>
|
zhw.
| 228,045 |
<p>The expression equals</p>
<p>$$\frac{\ln (\cos x) - \ln (\cos 0)}{\cos x - \cos 0}\cdot\frac{\cos x - 1}{x^2}\cdot \frac{x^2}{\sin^2 x}.$$</p>
<p>The first fraction $\to \ln'(1) = 1,$ by definition of the derivative. The limit of the second fraction is standard and equals $-1/2.$ The third fraction $\to 1.$ So the limit is $-1/2.$ </p>
|
1,959,080 |
<p>A book claims that $9(9_9) = 9^{387420489}$.</p>
<p>I've never seen such an expression, and I've been unable to find anything about it on Google...</p>
<p>How is it supposed to be evaluated?</p>
<p>For reference, the name of the book is <code>Pasatiempos curiosos e instructivos</code> and this is the page where the expression appears (it's in Spanish but I can provide a translation if needed):</p>
<p><a href="https://i.stack.imgur.com/wgQfY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wgQfY.jpg" alt="Book photo"></a></p>
|
Brian M. Scott
| 12,042 |
<p>Since $9^9=387,420,489$, I assume that it’s a way of writing $9^{9^9}$. I don’t read Spanish, but that appears to be a discussion of attempts by Arab mathematicians to write large numbers using only three digits; if that’s the case, we’re looking at a historical special-purpose notation that didn’t survive.</p>
|
1,315,265 |
<p>Let $X=\mathcal{L}_2 [-1,1]$ and for any scalar $\alpha$ we define $E_\alpha=\{f\in X: f \text{ continuous in } [-1,1] \text{ and } f(0)=\alpha \}$.</p>
<ol>
<li>Prove $E_\alpha$ is convex for any $\alpha$.</li>
<li>Prove $E_\alpha$ is dense in $\mathcal{L}_2$</li>
<li>Prove there is no $f\in X^*$ that separates $E_\alpha$ and $E_\beta$ for $\alpha \neq \beta$.</li>
</ol>
<p>Part 1 is easy because for $f,g\in E_\alpha$ we have $\gamma f(0) +(1-\gamma)g(0)=\gamma \alpha + (1-\gamma)\alpha =\alpha$.</p>
<p>I'm having trouble with 2 and 3. </p>
<p>I don't know how to approach 2: I can't use the density of step functions with value $\alpha$ at zero as they are not continuous, and I can't use Luzin's theorem in $[-1,1]$ because I need the function to be continuous everywhere in the interval. I also don't see how polynomial approximations help here. </p>
<p>How do I attack 2 and 3?
Edit: I was also given the hint that in 3 I might want to describe $f(E_\alpha)$ for any $f\in X^*$, but I don't know how to follow through.</p>
<p>Thanks in advance.</p>
|
Giuseppe Negro
| 8,157 |
<p>HINT: Once you have a continuous $g$ that is a good approximation to the generic $f\in L^2(-1, 1)$, take a $\delta>0$ and adjust $g$ near the point $0$ (where the constraint $f(0)=\alpha$ lives) by a linear fit: join $(-\delta, g(-\delta))$ to $(0,\alpha)$ and $(0,\alpha)$ to $(\delta, g(\delta))$. That is, consider the function
$$
g_\delta(x)=\begin{cases}
\text{piecewise linear, with } g_\delta(0)=\alpha, & x\in(-\delta, \delta) \\
g(x), & x\in [-1,1]\setminus(-\delta,\delta)
\end{cases}
$$
You can choose $\delta$ as small as you wish. See if you can make the distance $\|g-g_\delta\|_2$ sufficiently small.</p>
|
35,281 |
<p>I am looking for applications of category theory and homotopy theory in set theory and particularly in cardinal arithmetics. "Applications" in the broad sense of the word --- this would include theorems, definitions, questions, points of view (and papers) in set theory that could be motivated or understood with help of category theory and homotopy theory. I am aware of some applications of set theory in category theory, e.g. large cardinal axioms (Vopenka principle) are used to construct localisations in homotopy theory, but this is not what I am asking for. However, I would be interested to hear if Vopenka principle is equivalent to a statement in category or homotopy theory.</p>
<p>The reason for the question is that I am trying to better understand <a href="https://arxiv.org/abs/1006.4647" rel="nofollow noreferrer">this sketch</a> of an attempt to understand an invariant in PCF theory in terms of homotopy theory. I am most interested in applications to cardinal arithmetic.</p>
|
Andrej Bauer
| 1,176 |
<p>You could look at <a href="https://www.phil.cmu.edu/projects/ast/" rel="nofollow noreferrer">algebraic set theory</a>. For a general outline of how set theory, categories and type theory interact, see <a href="https://www.andrew.cmu.edu/user/awodey/" rel="nofollow noreferrer">Steve Awodey's</a> "<a href="https://www.andrew.cmu.edu/user/awodey/preprints/stcsFinal.pdf" rel="nofollow noreferrer">From sets, to types, to categories, to sets</a>". I don't know about direct connections between set theory and homotopy theory, but there is certainly a rich connection between type theory and homotopy theory, for example:</p>
<ul>
<li><a href="https://arxiv.org/abs/0906.4521v1" rel="nofollow noreferrer">https://arxiv.org/abs/0906.4521v1</a></li>
<li><a href="https://arxiv.org/abs/1007.4638" rel="nofollow noreferrer">https://arxiv.org/abs/1007.4638</a></li>
<li><a href="https://arxiv.org/abs/0803.4349" rel="nofollow noreferrer">https://arxiv.org/abs/0803.4349</a></li>
</ul>
<p>Also possibly relevant is:</p>
<ul>
<li><a href="https://arxiv.org/abs/0711.1529" rel="nofollow noreferrer">https://arxiv.org/abs/0711.1529</a></li>
</ul>
<p>Let me also mention that Vladimir Voevodsky has taken interest in connections between homotopy theory and foundations, see <a href="https://www.math.ias.edu/vladimir/" rel="nofollow noreferrer">his page</a>.</p>
<p>But I should say that all this (recent!) work is just laying the ground for what you seem to be asking for, namely further insights about set theory by means of category theory.</p>
|
35,281 |
<p>I am looking for applications of category theory and homotopy theory in set theory and particularly in cardinal arithmetics. "Applications" in the broad sense of the word --- this would include theorems, definitions, questions, points of view (and papers) in set theory that could be motivated or understood with help of category theory and homotopy theory. I am aware of some applications of set theory in category theory, e.g. large cardinal axioms (Vopenka principle) are used to construct localisations in homotopy theory, but this is not what I am asking for. However, I would be interested to hear if Vopenka principle is equivalent to a statement in category or homotopy theory.</p>
<p>The reason for the question is that I am trying to better understand <a href="https://arxiv.org/abs/1006.4647" rel="nofollow noreferrer">this sketch</a> of an attempt to understand an invariant in PCF theory in terms of homotopy theory. I am most interested in applications to cardinal arithmetic.</p>
|
Peter Arndt
| 733 |
<p>There is an interaction between category theory and set theory. In 1965, one year after Cohen's proof of the independence of the continuum hypothesis, Vopenka gave a proof using sheaf theory, see</p>
<ul>
<li>Kenneth Kunen, "[Omnibus Review]", The Journal of Symbolic Logic, <strong>34</strong> Issue 3 (1969) pp. 515 - 516, DOI: <a href="https://doi.org/10.2307/2270953" rel="nofollow noreferrer">https://doi.org/10.2307/2270953</a></li>
</ul>
<p>This is nowadays systematized in the topos theoretic interpretation of set theory, for which you should look up <a href="https://books.google.com/books?id=SGwwDerbEowC&printsec=frontcover&dq=MacLane+moerdijk&hl=en&ei=VSJjTOfFAsmiOIXdzf8J&sa=X&oi=book_result&ct=result&resnum=2&ved=0CDUQ6AEwAQ#v=onepage&q=MacLane%2520moerdijk&f=false" rel="nofollow noreferrer">MacLane/Moerdijk</a>, as Dylan Wilson pointed out. The authors give a proof of the independence of the continuum hypothesis.</p>
<p>Marta Bunge has given a topos theoretic proof of the independence of the Suslin hypothesis from ZFC in:</p>
<ul>
<li>Marta Bunge, <em>Topos Theory and Souslin's Hypothesis</em>. J.Pure & Applied Algebra 4 (1974) 159-187, <a href="https://doi.org/10.1016/0022-4049(74)90020-6" rel="nofollow noreferrer">https://doi.org/10.1016/0022-4049(74)90020-6</a>.</li>
</ul>
<p>As for the reformulation of Vopenka's principle: It is equivalent to the statement that a locally presentable category can not have a large full discrete subcategory. This, and more, is nicely explained in <a href="https://books.google.sk/books?id=iXh6rOd7of0C" rel="nofollow noreferrer">Adamek/Rosicky's Locally Presentable and Accessible Categories</a></p>
<p>On the other hand I know of no worked out connection between homotopy theory and set theory, just the indirect one via type theory mentioned by Andrej.</p>
|
2,541,997 |
<p>For what values of n can {1, 2, . . . , n} be partitioned into three subsets
with equal sums?</p>
<p>I noticed that the sum from 1 to n has to be a multiple of 3 and the common sum among these 3 subsets is this total divided by 3, but that alone is not a convincing argument. How do you prove there exist 3 subsets whose sums each evaluate to this number?</p>
<p>My solution made from other solutions</p>
<p>Theorem: The given set can be partitioned into three subsets with equal sums when $\sum_{i = 1}^{n}i = \frac{1}{2} n(n+1) \equiv 0 \mod 3$, except when $n = 2$ or $n = 3$; so the condition is $n \equiv 0,2 \mod 3$ with $n \ge 5$.</p>
<p>Here are some cases which will be important to prove our theorem<br>
n = 5 $\{1,4\},\{2,3\},\{5\}$<br>
n = 6 $\{1,6\},\{2,5\},\{3,4\}$<br>
n = 8 $\{8,4\},\{7,5\},\{1,2,3,6\}$<br>
n =9 $\{9,6\}, \{8,7\}, \{1,2,3,4,5\}$</p>
<p>Also note that if $n \equiv 0,2 \mod 3$ then $n \equiv 0,2,3,5 \mod 6$, and $\{5,6,8,9\}$ represents these residues, so we can start creating equal-sum subsets from the last 6 of the n elements. Once we are done with these 6 elements, we continue with the next 6, until we are left with 5, 6, 8 or 9 elements and then apply the base case.</p>
|
Ross Millikan
| 1,827 |
<p>The sum of all the numbers from $1$ to $n$ is $\frac 12n(n+1)$. As you say, we need this to be a multiple of $3$, which will be true when $n \equiv 0,2 \pmod 3$. We can't do $n=2$ or $n=3$, which we can prove by inspection. For $n=5$ there is $\{1,4\},\{2,3\},\{5\}$ and for $6$ we can do $\{1,6\},\{2,5\},\{3,4\}$. For larger $n$ it should be "obvious" that you have so much flexibility that it will be possible. To prove that, we can do $8$ and $9$ by hand, which I leave to you. Then you can do blocks of $6$ down from the top $\{k+1,k+6\},\{k+2,k+5\},\{k+3,k+4\}$ until you have $5,6,8,9$ left and use the solution we have for that. Now that you have groups of three sets with the same sum, take one out of each group to form one of the sets with sum $\frac 13 \cdot \frac 12n(n+1)$</p>
|
2,541,997 |
<p>For what values of n can {1, 2, . . . , n} be partitioned into three subsets
with equal sums?</p>
<p>I noticed that the sum from 1 to n has to be a multiple of 3 and the common sum among these 3 subsets is this total divided by 3, but that alone is not a convincing argument. How do you prove there exist 3 subsets whose sums each evaluate to this number?</p>
<p>My solution made from other solutions</p>
<p>Theorem: The given set can be partitioned into three subsets with equal sums when $\sum_{i = 1}^{n}i = \frac{1}{2} n(n+1) \equiv 0 \mod 3$, except when $n = 2$ or $n = 3$; so the condition is $n \equiv 0,2 \mod 3$ with $n \ge 5$.</p>
<p>Here are some cases which will be important to prove our theorem<br>
n = 5 $\{1,4\},\{2,3\},\{5\}$<br>
n = 6 $\{1,6\},\{2,5\},\{3,4\}$<br>
n = 8 $\{8,4\},\{7,5\},\{1,2,3,6\}$<br>
n =9 $\{9,6\}, \{8,7\}, \{1,2,3,4,5\}$</p>
<p>Also note that if $n \equiv 0,2 \mod 3$ then $n \equiv 0,2,3,5 \mod 6$, and $\{5,6,8,9\}$ represents these residues, so we can start creating equal-sum subsets from the last 6 of the n elements. Once we are done with these 6 elements, we continue with the next 6, until we are left with 5, 6, 8 or 9 elements and then apply the base case.</p>
|
Community
| -1 |
<p>If $n=6k$, make the following subsets:</p>
<p>$$1,4,\cdots,3k-2,3k+3,3k+6,\cdots,6k$$
$$2,5,\cdots,3k-1,3k+2,3k+5,\cdots,6k-1$$
$$3,6,\cdots,3k,3k+1,3k+4,\cdots,6k-2$$</p>
<p>If $n=6k+r$, where $r=5,8,9$, reduce to the previous case by first partitioning the set $\{1,2,\cdots,r\}$ and then partitioning the $6k$-element set $\{r+1,r+2,\cdots,n\}$ the same as above, except that each set is “translated upwards” by $r$ (all elements increased by $r$).</p>
<p>Example: partition $\{1,2,\cdots,11\}$: $r=5, k=1$ so we first partition $\{1,2,3,4,5\}$ into $\{1,4\},\{2,3\},\{5\}$, partition $\{1,2,3,4,5,6\}$ into $\{1,6\},\{2,5\},\{3,4\}$, “translate” the latter upwards by 5: $\{6,11\},\{7,10\},\{8,9\}$ and finally join them to get $\{1,4,6,11\},\{2,3,7,10\},\{5,8,9\}$.</p>
<p>The above construction works for any $n\ge 5$, $n\equiv 0$ or $n\equiv 2 \mod 3$. Cases $n=2$ and $n=3$ have no solutions, and neither do the cases $n\equiv 1 \mod 3$, because in those cases the total sum $\frac{n(n+1)}{2}$ is not divisible by $3$.</p>
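<p>A quick Mathematica check of the $n=6k$ construction, in case it is useful (the three sets are entered exactly as written above):</p>

<pre><code>check[k_] := Module[{s1, s2, s3},
  s1 = Join[Range[1, 3 k - 2, 3], Range[3 k + 3, 6 k, 3]];
  s2 = Join[Range[2, 3 k - 1, 3], Range[3 k + 2, 6 k - 1, 3]];
  s3 = Join[Range[3, 3 k, 3], Range[3 k + 1, 6 k - 2, 3]];
  Sort[Join[s1, s2, s3]] == Range[6 k] &&          (* the sets partition {1,...,6k} *)
   Total[s1] == Total[s2] == Total[s3]]            (* and have equal sums *)

And @@ Table[check[k], {k, 1, 20}]
(* True *)
</code></pre>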
|
3,248,552 |
<p>Imagine we want to use Theon's ladder to approximate <span class="math-container">$\sqrt{3}$</span>. The appropriate expressions are
<span class="math-container">$$x_n=x_{n-1}+y_{n-1}$$</span></p>
<p><span class="math-container">$$y_n=x_n+2x_{n-1}$$</span></p>
<p>Rungs 6 through 10 in the approximation of <span class="math-container">$\sqrt{3}$</span> are </p>
<p><span class="math-container">$\{\{208, 120\},\{568, 328\}, \{1552, 896\}, \{4240, 2448\}, \{11584, 6688\}\}$</span></p>
<p>a) Compute the two values in rung 11 of the ladder.</p>
<p>I'm assuming that all I need to do is plug into the formula. So:</p>
<p><span class="math-container">$x_{11}=x_{10}+y_{10}$</span></p>
<p><span class="math-container">$x_{11}=6688+11584=18272$</span></p>
<p><span class="math-container">$y_{11}=x_{11}+2x_{10}$</span></p>
<p><span class="math-container">$y_{11}=18272+2(6688)=31648$</span></p>
<p>Is this correct? Part b is really what I am struggling with. </p>
<p>b) The figure below shows five rectangles whose dimensions correspond to rungs 6 through 10 above. That is, the lower left corner of each is at (0,0), while the upper right corners are at <span class="math-container">$(208,120),(568, 328),...,(11584,6688)$</span> Are any of these rectangles similar to each other? Explain, briefly, your reasoning. </p>
<p>All I can think is that 6688 and 120 have a gcd of 8, and the gcd of 11584 and 208 is 16. Not really sure how to articulate that this helps with the similarity of the rectangles. Thanks for the help</p>
<p><a href="https://i.stack.imgur.com/X0MOH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X0MOH.png" alt="enter image description here"></a></p>
|
Cesareo
| 397,348 |
<p>Hint.</p>
<p>Calling <span class="math-container">$\lambda_n = \frac{y_n}{x_n}$</span> we have</p>
<p><span class="math-container">$$
\lambda_n = \frac{\lambda_{n-1}+3}{\lambda_{n-1}+1}
$$</span></p>
<p>giving a sequence <span class="math-container">$\lambda_n$</span> whose limit <span class="math-container">$\lambda$</span> satisfies</p>
<p><span class="math-container">$$
\lambda = \frac{\lambda+3}{\lambda+1}\Rightarrow \lambda^2 = 3
$$</span></p>
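<p>A quick numerical check of this (a small Python sketch; the listed rungs are read as pairs $\{y_n, x_n\}$, which is what matches the numbers in the question) also confirms the rung-11 values $x_{11}=18272$, $y_{11}=31648$ from part (a):</p>

<pre><code>from math import sqrt

x, y = 120, 208                      # rung 6
for rung in range(7, 12):
    # x_n = x_{n-1} + y_{n-1},   y_n = x_n + 2 x_{n-1}
    x, y = x + y, (x + y) + 2 * x
    print(rung, (y, x), y / x)

print(sqrt(3))                       # the ratio y/x tends to sqrt(3)
</code></pre>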
|
3,278 |
<h3>What are Community Promotion Ads?</h3>
<p>Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown.</p>
<h3>Why do we have Community Promotion Ads?</h3>
<p>This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things:</p>
<ul>
<li>the site's twitter account</li>
<li>useful tools or resources for the mathematically inclined</li>
<li>interesting articles or findings for the curious</li>
<li>cool events or conferences</li>
<li>anything else your community would genuinely be interested in</li>
</ul>
<p>The goal is for future visitors to find out about <em>the stuff your community deems important</em>. This also serves as a way to promote information and resources that are <em>relevant to your own community's interests</em>, both for those already in the community and those yet to join. </p>
<h3>How does it work?</h3>
<p>The answers you post to this question <em>must</em> conform to the following rules, or they will be ignored. </p>
<ol>
<li><p>All answers should be in the exact form of:</p>
<pre><code>[![Tagline to show on mouseover][1]][2]
[1]: http://image-url
[2]: http://clickthrough-url
</code></pre>
<p>Please <strong>do not add anything else to the body of the post</strong>. If you want to discuss something, do it in the comments.</p></li>
<li><p>The question must always be tagged with the magic <a href="/questions/tagged/community-ads" class="post-tag moderator-tag" title="show questions tagged 'community-ads'" rel="tag">community-ads</a> tag. In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form.</p></li>
</ol>
<h3>Image requirements</h3>
<ul>
<li>The image that you create must be <strong>220 x 250 pixels</strong></li>
<li>Must be hosted through our standard image uploader (imgur)</li>
<li>Must be GIF or PNG</li>
<li>No animated GIFs</li>
<li>Absolute limit on file size of 150 KB</li>
</ul>
<h3>Score Threshold</h3>
<p>There is a <strong>minimum score threshold</strong> an answer must meet (currently <strong>6</strong>) before it will be shown on the main site.</p>
<p>You can check out the ads that have met the threshold with basic click stats <a href="http://meta.math.stackexchange.com/ads/display/3278">here</a>.</p>
|
kuch nahi
| 8,365 |
<p><a href="http://www.sagemath.org/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7CiYR.png" alt="Sage Math"></a></p>
|
3,180,965 |
<p>Assume <span class="math-container">$f(x) \in L_1([a,b])$</span> and <span class="math-container">$x_0\in[a,b]$</span> is a point such that <span class="math-container">$f(x)\xrightarrow[x\to x_0]\ +\infty$</span>.</p>
<p>Does there always exist a function <span class="math-container">$g(x) \in L_1([a,b])$</span> such that <span class="math-container">$f(x)=o(g(x))$</span> as <span class="math-container">$x\to x_0$</span>?</p>
<p>In the particular case when <span class="math-container">$f(x) \in L_{1+\varepsilon}([a,b])$</span> for every <span class="math-container">$\varepsilon>0$</span>, a suitable function <span class="math-container">$g(x)$</span> exists. So, if a counterexample exists, it is in <span class="math-container">$L_1$</span> and cannot be in any <span class="math-container">$L_{1+\varepsilon}$</span>.</p>
|
Amichai Lampert
| 663,306 |
<p>Hmmm... We can use the open mapping theorem to conclude there must be an unbounded function <span class="math-container">$h$</span> such that <span class="math-container">$h \cdot f \in L^1$</span>, but I don't know how to guarantee that the unbounded part is near <span class="math-container">$x_0$</span>.</p>
|
3,180,965 |
<p>Assume <span class="math-container">$f(x) \in L_1([a,b])$</span> and <span class="math-container">$x_0\in[a,b]$</span> is a point such that <span class="math-container">$f(x)\xrightarrow[x\to x_0]\ +\infty$</span>.</p>
<p>Does there always exist a function <span class="math-container">$g(x) \in L_1([a,b])$</span> such that <span class="math-container">$f(x)=o(g(x))$</span> as <span class="math-container">$x\to x_0$</span>?</p>
<p>In the particular case when <span class="math-container">$f(x) \in L_{1+\varepsilon}([a,b])$</span> for every <span class="math-container">$\varepsilon>0$</span>, a suitable function <span class="math-container">$g(x)$</span> exists. So, if a counterexample exists, it is in <span class="math-container">$L_1$</span> and cannot be in any <span class="math-container">$L_{1+\varepsilon}$</span>.</p>
|
Selene
| 467,694 |
<p>Yes, there is.</p>
<p>WLOG <span class="math-container">$f\geq 0$</span>. Let <span class="math-container">$M_t=\{f\geq t\}$</span> and <span class="math-container">$\lambda(t)= \int_{M_t}fdx$</span>. Then</p>
<p>1) <span class="math-container">$\lambda(t)>0 $</span>. </p>
<p>2) <span class="math-container">$ \lambda(t)\downarrow 0$</span> as <span class="math-container">$t\to\infty$</span>. </p>
<p>3) <span class="math-container">$x_0\in M_t^o$</span>, <span class="math-container">$\forall t\in\mathbb{R}_+$</span>.</p>
<p>Choose <span class="math-container">$\{t_n\}_n$</span> by induction, which satisfies:</p>
<p>1) <span class="math-container">$t_{n+1}>t_n$</span>.</p>
<p>2) <span class="math-container">$\lambda(t_n)\leq 3^{-n}$</span>.</p>
<p>Define <span class="math-container">$g:=+\infty \cdot I_{\{f=+\infty\}}+\sum_{n=1}^\infty 2^nI_{\{M_{t_n}-M_{t_{n+1}}\}}$</span>. Then</p>
<p><span class="math-container">$$\int fgdx= \sum_n 2^n\int_{M_{t_n}-M_{t_{n+1}}} fdx\leq\sum_n 2^n3^{-n}\leq 3.$$</span></p>
<p><span class="math-container">$fg$</span> is what we want.</p>
|
2,495,918 |
<p>A triangle is formed by the lines $x-2y-6=0$ , $ 3x−y+6=0$, $7x+4y−24=0$.</p>
<p>Find the equation of the line that bisects the inner angle of the triangle that is facing the side $7x+4y−24=0$.</p>
<p>I tried to find the intersect point of three equations by put them equal two by two. However, I don't know what to do next. could someone help me, please?</p>
|
Lubin
| 17,760 |
<p>This has nothing to do with triangle. You have two lines $x-2y=6$ and $3x-y=6$, intersecting at the point $P=(6/5,-12/5)$, and want an equation for the line bisecting the angle, presumably with positive slope. My suggestion, perhaps not as good as that of @GAVD, is to take any other point $Q_1$ on the first line, find its distance $d$ from $P$, and then find a point $Q_2$ on the second line at a distance of $d$ from $P$. Then build the rhombus whose vertices are, in order, $Q_1,P,Q_2,P'$, and draw the bisector from $P$ to $P'$. If you’re doing it by hand computation, it’ll be a mess. </p>
|
1,393,822 |
<p>For a complex number $w$, or $a+bi$, is there a specific term for the value $w\overline{w}$, or $a^2+b^2$?</p>
|
Mark Viola
| 218,419 |
<p>We have the inequalities</p>
<p><span class="math-container">$$\int_0^nx^{3/2}dx<\sum_{k=1}^n k^{3/2}<\int_0^{n+1}x^{3/2}dx$$</span></p>
<p>whereupon carrying out the integrals yields</p>
<p><span class="math-container">$$\frac25 n^{5/2}<\sum_{k=1}^n k^{3/2}<\frac25 (n+1)^{5/2}$$</span></p>
<p>The approximation of interest is simply the average of the upper and lower limits of the sum and is expressed as</p>
<p><span class="math-container">$$\bbox[5px,border:2px solid #C0A000]{\sum_{k=1}^n k^{3/2}\approx \frac{n^{5/2}+(n+1)^{5/2}}{5}}$$</span></p>
<p>A much better approximation is found using the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#The_formula" rel="nofollow noreferrer">Euler-Maclaurin Formula</a> and is</p>
<p><span class="math-container">$$\bbox[5px,border:2px solid #C0A000]{\sum_{k=1}^n k^{3/2}=\frac25n^{5/2}+\frac12 n^{3/2}+\frac18n^{1/2}+C+\frac{1}{1920}n^{-3/2}+O(n^{-7/2})}$$</span></p>
<p>where the constant <span class="math-container">$C$</span> can be found numerically and is approximately given by <span class="math-container">$C\approx -0.025496493$</span>.</p>
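<p>A quick numerical comparison of the two approximations (a small Python sketch using the constant quoted above):</p>

<pre><code>from math import sqrt

C = -0.025496493                       # constant quoted above

for n in (10, 100, 1000):
    s = sum(k ** 1.5 for k in range(1, n + 1))
    simple = (n ** 2.5 + (n + 1) ** 2.5) / 5
    em = 0.4 * n ** 2.5 + 0.5 * n ** 1.5 + 0.125 * sqrt(n) + C + n ** -1.5 / 1920
    print(n, s - simple, s - em)       # the Euler-Maclaurin error is far smaller
</code></pre>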
|
368,789 |
<p>Suppose I have a family of elliptic curves $E_{n}/\mathbb{Q}$. I would like to determine the torsion subgroup of $E_{n}(\mathbb{Q})$ denoted by $E_{n}(\mathbb{Q})_{\textrm{tors}}$. Two ways to do this are using Nagell-Lutz and computing the number of points over $\mathbb{F}_{\ell}$ for various $\ell$. Are there other ways to determine the torsion subgroup of an elliptic curve?</p>
|
Matt E
| 221 |
<p>For a fixed curve over $\mathbb Q$, the easiest way is to check Cremona's tables (!), since it is pretty unlikely that your curve has conductor big enough not to be there. </p>
<p>Sorry for the cheeky answer; here is another slightly more serious one:</p>
<p>I think that using the methods you suggest is pretty
standard; as Don Antonio mentions, Mazur's theorem also gives a pretty
good absolute upper bound. </p>
<p>One alternative to actually working mod $\mathbb F_{\ell}$ for various $\ell$
is to compute the modular form attached to $E$ (or in practice, look it
up in a table). Then it is pretty easy to look for congruences
$a_{\ell} \equiv 1 + \ell \pmod p$ and hence at least determine the possible primes that divide the order of the torsion subgroup.</p>
<p>In the end, if you are using a table, as I said before this table will also likely just contain a precise description of the torsion subgroup. Still,
the idea of relating the structure of the torsion to congruences between
the modular form of $E$ and an Eisenstein series is important. (E.g. it
is the basic mechanism in Mazur's proof of this theorem.)</p>
<p>One thing to remember though is that the modular form, or equivalently
the number of $\mathbb F_{\ell}$ points, is an isogeny invariant, so
that even if computing $\mathbb F_{\ell}$-points suggests a $p$-torsion
point, there may not actually be such a point on your curve (but I guess
there will be on some $p$-isogenous curve). (You can see an example
in the isogeny class of $X_0(11)$, with $p = 5$.)</p>
|
46,236 |
<p>Apologies for the uninformative title, this is a relatively specific question so it was hard to title. </p>
<p>I'm solving the following recurrence relation:</p>
<blockquote>
<p>$a_{n} + a_{n-1} - 6a_{n-2} = 0$<br>
With initial conditions $a_{0} = 3$
and $a_{1} = 1$</p>
</blockquote>
<p>And I have it mostly figured out except for the very last part.</p>
<p>My working:</p>
<p>We have characteristic equation $s^2 + s - 6 = 0$
This factorises to $(s+3)(s-2)$<br>
Hence we have roots $s=-3$ and $s=2$</p>
<p>and hence the <strong>solution has the form $a_{n} = -x3^n + y2^n$</strong></p>
<p>We sub in the initial conditions:</p>
<p>$a_{0} = x + y = 3$<br>
$a_{1} = -3x+2y = 1$<br></p>
<p>And solving this system we have solutions: <br>
$x = 1$ and $y = 2$</p>
<p>Hence subbing this back to what we work out to be the general form of the solution: </p>
<p>$a_{n} = (-1)3^n + (2)2^n$ <br>
$a_{n} = (-3)^n + (4)^n$ Correct?</p>
<p>But it is incorrect, the correct solution is:</p>
<p>$a_{n} = (-3)^n + 2^{n+1}$</p>
<p>I don't understand where the $2^{n+1}$ came from. What am I missing here?</p>
|
Zev Chonoles
| 264 |
<p>You can't multiply expressions with exponents like that. $2\times (2^n)$ is equal to $2^{n+1}$, not
$$4^n=\underbrace{4\times\cdots\times 4}_{n\text{ times}}=\underbrace{(2\times 2)\times\cdots\times (2\times 2)}_{n\text{ times}}=\underbrace{2\times\cdots\times 2}_{2n\text{ times}}=2^{2n}.$$</p>
<p>Also, the general solution is
$$a_n=x(-3)^n+y2^n,$$
which is <strong>not</strong> the same as $-x3^n+y2^n$, which is what you wrote in the question. It so happens that since $x$ is 1, this discrepancy did not cause a problem, but you should be aware of the issue in general.</p>
|
65,631 |
<pre><code>Ticker[comp_String] :=
Interpreter["Company"][comp] /. Entity[_, x_] :> x
ticks = Ticker /@ {"Apple", "Google"}
</code></pre>
<blockquote>
<p>{"NASDAQ:AAPL", "NASDAQ:GOOGL"}</p>
</blockquote>
<pre><code>DateListPlot[{
FinancialData[ticks[[1]], "CumulativeFractionalChange", {2010}],
FinancialData[ticks[[2]], "CumulativeFractionalChange", {2010}],
FinancialData["NASDAQ100", "CumulativeFractionalChange", {2010}]
},
GridLines -> Automatic,
PlotLegends -> {ticks[[1]], ticks[[2]], "NASDAQ100"},
Joined -> True,
ImageSize -> 500,
Filling -> Bottom]
</code></pre>
<p><img src="https://i.stack.imgur.com/T6oFA.jpg" alt="enter image description here"></p>
<p>I have many questions, but only pose two:</p>
<p>(1) How can I efficiently apply a moving average of , let's say, 200 days to the above lines?</p>
<p>(2) How can I sort the <code>PlotLegends</code>? (NASDAQ100 should appear before NASDAQ:GOOGL)</p>
|
kglr
| 125 |
<pre><code>stripF = ToExpression[ToString[#, StandardForm]] &;
stripF /@ {Style[1201/100000, FontFamily -> "Charter", FontSize -> 20],
Style[NumberForm[10.01, {10, 2}], FontFamily -> "Academy Engraved LET",
FontSize -> 50, FontColor -> RGBColor[0, 1, 0]]}
(* {1201/100000, 10.01} *)
</code></pre>
<p>Also:</p>
<pre><code>CalculateUtilities`StringUtilities`Private`stripformatting /@
{Style[1201/100000, FontFamily -> "Charter", FontSize -> 20],
Style[NumberForm[10.01, {10, 2}], FontFamily -> "Academy Engraved LET",
FontSize -> 50, FontColor -> RGBColor[0, 1, 0]]}
(* {1201/100000, 10.010} *)
</code></pre>
|
2,165,759 |
<p>I am solving the following question</p>
<p>$$\int\frac{\sin x}{\sin^{3}x + \cos^{3}x}dx.$$</p>
<p>I have been able to reduce it to the following form by dividing the numerator and denominator by $\cos^{3}x$ and then substituting $t = \tan x$, which gives the integral below. Should I use partial fractions to integrate it further, or is there another way?</p>
<p>$$\int\frac{t}{t^3 + 1}dt.$$</p>
|
MrYouMath
| 262,304 |
<p>Hint:</p>
<p>$\int \frac{t}{t^3+1} dt = \int \frac{t+t^2-t^2}{t^3+1} dt =\int \frac{t(t+1)}{t^3+1} dt-\frac{1}{3}\int \frac{3t^2}{t^3+1} dt=\int \frac{t(t+1)}{(t+1)(t^2-t+1)} dt-\frac{1}{3}\ln(t^3+1)$</p>
<p>$=\frac{1}{2}\int \frac{2t}{t^2-t+1} dt-\frac{1}{3}\ln(t^3+1)=\frac{1}{2}\int \frac{2t-1+1}{t^2-t+1} dt-\frac{1}{3}\ln(t^3+1)$</p>
<p>$=\frac{1}{2}\int \frac{2t-1}{t^2-t+1} dt+\frac{1}{2}\int \frac{1}{t^2-t+1} dt-\frac{1}{3}\ln(t^3+1)$</p>
<p>$=\frac{1}{2}\ln(t^2-t+1)+\frac{1}{2}\int \frac{1}{t^2-t+1} dt-\frac{1}{3}\ln(t^3+1)$</p>
<p>The remaining integral can be evaluated by partial fractions with complex roots, or by completing the square.</p>
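<p>For completeness, one way to finish (a short sketch, completing the square instead of using complex roots): since $t^2-t+1=\left(t-\frac{1}{2}\right)^2+\frac{3}{4}$,</p>
<p>$$\int \frac{1}{t^2-t+1}\,dt=\frac{2}{\sqrt{3}}\arctan\left(\frac{2t-1}{\sqrt{3}}\right)+C.$$</p>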
|
794,912 |
<p>I am reviewing Calculus III using <a href="http://www.jiblm.org/downloads/dlitem.aspx?id=82&category=jiblmjournal" rel="nofollow">Mahavier, W. Ted's material</a> and get stuck on one question in chapter 1. Here is the problem:</p>
<p>Assume $\vec{u},\vec{v}\in \mathbb{R}^3$. Find a vector $\vec{x}=(x,y,z)$ so that $\vec{x}\perp\vec{u}$ and $\vec{x}\perp\vec{v}$ and $x+y+z=1$.</p>
<p>My attempt:
From the last condition, I know that $\vec{x}$ ends at the plane intersecting the $x-,y-,z-$axis at $(1,0,0),(0,1,0)$ and $(0,0,1)$. From the orthogonal conditions, $\vec{x}$ is perpendicular to the plane formed by $\vec{u},\vec{v}$ if they are distinct, otherwise, any plane that contains $\vec{u},\vec{v}$. </p>
<p>Am I on the right track? And how do I go from here? Thanks!</p>
<p><strong>Edit</strong>: Thanks for all who responded! I do remember cross product. However, at this point of the book, the definition of cross product has not been introduced yet. I wonder whether there are other means to attack this problem without invoking a to-be-introduced concept?</p>
<p>Thanks again!</p>
|
rogerl
| 27,542 |
<p>You get three equations in the three unknowns $x$, $y$, and $z$ from $x+y+z=1$, $\vec{x}\cdot\vec{u} = 0$, and $\vec{x}\cdot\vec{v} = 0$. If these three equations have a solution, that is the vector you are looking for. However, they do not always have a solution (for example, try $\vec{u} = (1,1,1)$ and $\vec{v} = (1,0,1)$). There are other cases where they will not have a unique solution (where $\vec{u}$ and $\vec{v}$ are collinear).</p>
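<p>For instance, here is a small numerical sketch of that approach with made-up example vectors $\vec{u}=(1,2,3)$ and $\vec{v}=(0,1,1)$:</p>

<pre><code>import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 1.0])
M = np.vstack([u, v, np.ones(3)])   # rows encode x.u = 0, x.v = 0, x+y+z = 1
rhs = np.array([0.0, 0.0, 1.0])
x = np.linalg.solve(M, rhs)         # raises LinAlgError when M is singular, i.e. no unique solution
print(x, x @ u, x @ v, x.sum())     # [ 1.  1. -1.]  0.0  0.0  1.0
</code></pre>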
|
221,712 |
<p>I have two matrix <code>A</code> and <code>B</code> of equal dimensions see below. In <code>A</code> matrix I have the variables <code>a,b,c,d</code> which have direct correspondence with matrix <code>B</code> element by each row. In other words, for first row <code>{a, b, c, d}</code> we have <code>{2, 9, 6, 7}</code>, further for each element in both row <code>a=2, b=9, c=6 and d=7</code> similarly for other rows in both matrix. </p>
<pre><code>A={{a, b, c, d}, {d, c, b, a}, {a, c, b, d}};
B={{2, 9, 6, 7}, {11, 3, 5, 12}, {12, 4, 1, 4}};
</code></pre>
<p>After mapping these two matrix, I want to perform simple mathematical operations (addition and subtraction). For example, for first row:</p>
<pre><code>x1=a-d=2-7=-5
y1=b-a=9-2=7
</code></pre>
<p>similarly fir second row, </p>
<pre><code>x2=a-d=12-11=1
y2=b-a=5-12=-7
</code></pre>
<p>I can map these two matrix by <code>Map[A,B]</code>, but I don´t know how to map each element of both matrix. Is there a way we can map each element and then by using loop we evaluate <code>a-d, b-a</code> for each row?</p>
<p>Thanks in Advance </p>
|
xzczd
| 1,871 |
<p>How about:</p>
<pre><code>MapThread[Block[{a, b, c, d}, # = #2; {a - d, b - a}] &, {A, B}]
(* {{-5, 7}, {1, -7}, {8, -11}} *)
</code></pre>
|
2,222,215 |
<p>Determine whether the difference of the following two series is convergent or not and Prove your answer$$
\sum_{n=1}^\infty \frac{1}{n} $$ and $$\sum_{n=1}^\infty \frac{1}{2n-1} $$</p>
<p>What I tried: I said that the difference of the two series is divergent. My proof is as follows. Find the difference of the two series to get $$\sum_{n=1}^\infty \frac{1}{n} -\sum_{n=1}^\infty \frac{1}{2n-1} = \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$$
But it is difficult to prove directly that $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is divergent. So I tried proving it by contradiction, assuming that it is convergent: rearranging the above equation we have $\sum_{n=1}^\infty \frac{1}{n} =\sum_{n=1}^\infty \frac{1}{2n-1} + \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$, and since $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is convergent by our assumption and $\sum_{n=1}^\infty \frac{1}{2n-1}$ is also convergent (this would need to be proven), the sum of both series would also have to be convergent, contradicting the fact that $\sum_{n=1}^\infty \frac{1}{n}$ is divergent, and thus proving the statement. Is my proof correct, and is there a better proof? Could anyone explain the proof to me? Thanks</p>
|
Ben Grossmann
| 81,360 |
<p>For a direct comparison test, note that
$$
\frac{(n-1)}{n(2n - 1)} \geq \frac{n-1}{n(2n)} = \frac{n-1}{2n^2}
$$
If the sum converged, then by comparison $\sum_{n=1}^\infty \frac{n-1}{2n^2}$ would converge as well. But
$$
\frac{n-1}{2n^2} = \frac{1}{2n} - \frac{1}{2n^2},
$$
so we would get $\sum_{n=1}^\infty \frac{1}{2n} = \sum_{n=1}^\infty \frac{n-1}{2n^2} + \sum_{n=1}^\infty \frac{1}{2n^2}$, a sum of two convergent series; this contradicts the divergence of the harmonic series. Hence the difference of the two given series diverges.</p>
|
2,555,815 |
<p><strong>Problem</strong></p>
<p>Let $a_{0}(n) = \frac{2n-1}{2n}$ and $a_{k+1}(n) = \frac{a_{k}(n)}{a_{k}(n+2^k)}$ for $k \geq 0.$</p>
<p>The first several terms in the series $a_k(1)$ for $k \geq 0$ are:</p>
<p>$$\frac{1}{2}, \, \frac{1/2}{3/4}, \, \frac{\frac{1}{2}/\frac{3}{4}}{\frac{5}{6}/\frac{7}{8}}, \, \frac{\frac{1/2}{3/4}/\frac{5/6}{7/8}}{\frac{9/10}{11/12}/\frac{13/14}{15/16}}, \, \ldots$$</p>
<p>What limit do the values of these fractions approach?</p>
<p><strong>My idea</strong></p>
<p>I have calculated the series using recursion in C programming, and it turns out that for $k \geq 8$, the first several digits of $a_k(1)$ are $ 0.7071067811 \ldots,$ so I guess that the limit exists and would be $\frac{1}{\sqrt{2}}$.</p>
|
Community
| -1 |
<p>This is not a full solution, either, just a remark following @Kelenner's trail of thought:<br>
$\displaystyle \sum_{q\geq 0}\frac{(-1)^{s_2(q)}}{n+q}$ is convergent, where $s_2(n)$ is the sum of 1-bits in the binary representation of $n$.<br>
<strong>Proof:</strong> Let $b_n=(-1)^{s_2(n)}$. According to Dirichlet's test, for the convergence of $\displaystyle \sum_{q\geq 0}\frac{b_q}{n+q}$, it is sufficient that the sequence of partial sums $\displaystyle B_n=\sum_{0\le q\le n}b_q$ is bounded. But we have $s_2(2q)=s_2(q)$ and $s_2(2q+1)=s_2(q)+1$, since we're just appending one bit $0$ or $1$ to the binary representation of $q$. This means $b_{2q}=b_q$ and $b_{2q+1}=-b_q$, and thus
$$B_{2n+1}=\sum^{2n+1}_{q=0}b_q=\sum^n_{q=0}(b_{2q}+b_{2q+1})=0,$$ while
$$B_{2n}=B_{2n-1}+b_{2n}=b_{2n}.$$ So we have $|B_n|\le1$.<br>
As Kelenner pointed out, this implies the existence of a finite $\displaystyle\lim_{m\to\infty}a_m(n)$ for every $n$.</p>
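<p>The numerical observation in the question is also easy to reproduce; here is a small Python sketch (plain floats are accurate enough at this depth):</p>

<pre><code>from math import sqrt

def a(k, n):
    # a_0(n) = (2n-1)/(2n),   a_{k+1}(n) = a_k(n) / a_k(n + 2^k)
    if k == 0:
        return (2 * n - 1) / (2 * n)
    return a(k - 1, n) / a(k - 1, n + 2 ** (k - 1))

for k in range(1, 11):
    print(k, a(k, 1))
print(1 / sqrt(2))        # the values settle near 0.70710678...
</code></pre>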
|
1,768,317 |
<p>Show that $\sin(x) > \ln(x+1)$ when $x \in (0,1)$. </p>
<p>I'm expected to use the maclaurin series (taylor series when a=0)</p>
<p>So if i understand it correctly I need to show that: </p>
<p>$$\sin(x) = \lim\limits_{n \rightarrow \infty} \sum_{k=1}^{n} \frac{(-1)^{k-1}}{(2k-1)!} \cdot x^{2k-1} > \lim\limits_{n \rightarrow \infty} \sum_{k=1}^{n} \frac{(-1)^{k-1}}{k} \cdot x^k = \ln(x+1)$$</p>
<p>I tried to show that for any k the general term in the bigger sum is greater then the other one (the general term in the smaller sum) but its not true :(.</p>
<p>$$\frac{(-1)^{k-1} \cdot x^{2k-1}}{(2k-1)!} > \frac{(-1)^{k-1} \cdot x^k}{k}$$</p>
<p>when k is odd we get:</p>
<p>$$\frac{x^{k-1}}{(2k-1)!} > \frac{1}{k}$$</p>
<p>and this is a contradiction since :</p>
<p>$x^{k-1} < 1$ for any $x \in (0,1)$ and $k > 1$ and $(2k-1)! > k $ </p>
<p>so for any $k > 1 $ its $\frac{x^{k-1}}{(2k-1)!} < \frac{1}{k}$ and if $k=1$ its $\frac{x^{k-1}}{(2k-1)!} = \frac{1}{k}$.</p>
<p>What am I doing wrong and how i'm supposed to prove it ? </p>
<p>Thanks in advance for help . </p>
|
xpaul
| 66,420 |
<p>Let $f(x)=\sin x-\ln(x+1)$. We try to show that $f(x)$ is increasing. In fact
$$ f'(x)=\cos x-\frac{1}{x+1}=\frac{(x+1)\cos x-1}{x+1}. $$
Now we show $(x+1)\cos x>1$ or $\sec x-1<x$ for $0<x<1$.
Note
\begin{eqnarray}
\sec x-1=\frac{1-\cos x}{\cos x}=\frac{2\sin^2\frac{x}{2}}{1-2\sin^2\frac{x}{2}}.
\end{eqnarray}
Since $\frac{2u}{1-2u}$ is increasing and $\sin x\le x$ for $x\in[0,\pi/2]$, one has
\begin{eqnarray}
\sec x-1=\frac{2\sin^2\frac{x}{2}}{1-2\sin^2\frac{x}{2}}\le \frac{x^2}{2-x^2}\le x^2<x.
\end{eqnarray}
Hence $f'(x)>0$ on $(0,1)$, so $f$ is increasing there; since $f(0)=\sin 0-\ln 1=0$, we get $f(x)>0$, i.e. $\sin x>\ln(x+1)$ for $x\in(0,1)$. Done.</p>
|
569,103 |
<blockquote>
<blockquote>
<p>How can I calculate the first partial derivative $P_{x_i}$ and the second partial derivative $P_{x_i x_i}$ of function:
$$
P(x,y):=\frac{1-\Vert x\rVert^2}{\Vert x-y\rVert^n}, x\in B_1(0)\subset\mathbb{R}^n,y\in S_1(0)?
$$</p>
</blockquote>
</blockquote>
<p>I ask this with regard to <a href="https://math.stackexchange.com/questions/568453/show-that-the-poisson-kernel-is-harmonic-as-a-function-in-x-over-b-10-setminu">Show that the Poisson kernel is harmonic as a function in x over $B_1(0)\setminus\left\{0\right\}$</a>.</p>
<p>I think it makes sense to ask this in a separate question in order to give details to my calculations.</p>
<hr>
<p><strong>First partial derivative:</strong></p>
<p>I use the quotient rule. To do so I set
$$
f(x,y):=1-\lVert x\rVert^2,~~~~~g(x,y)=\Vert x-y\rVert^n.
$$
Then I have to calculate
$$
\frac{f_{x_i}g-fg_{x_i}}{g^2}.
$$
Ok, I start with
$$
f_{x_i}=(1-\lVert x\rVert^2)_{x_i}=(1)_{x_i}-(\sum_{i=1}^n x_i^2)_{x_i}=-2x_i.
$$
Next is to use the chain rule:
$$
g_{x_i}=((\sum_{i=1}^{n}(x_i-y_i)^2)^{\frac{n}{2}})_{x_i}=\frac{n}{2}\lVert x-y\rVert^{n-2}(2x_i-2y_i)
$$</p>
<p>So all in all I get
$$
P_{x_i}=\frac{-2x_i\cdot\Vert x-y\rVert^n-(1-\lVert x\rVert^2)\cdot\frac{n}{2}\lVert x-y\rVert^{n-2}(2x_i-2y_i)}{\Vert x-y\rVert^{2n}}
$$</p>
<p>Is that correct? Can one simplify that?</p>
<p>I stop here. If you say it is correct I continue with calculatin $P_{x_i x_i}$.</p>
|
Paramanand Singh
| 72,031 |
<p>While I am still searching for a simple solution based on LHR, I found that method of Taylor series can also be applied without much difficulty. However we will need to make the substitution $\tan x = t$ so that $\sec x = \sqrt{1 + t^{2}}$ and as $x \to 0$ we also have $t \to 0$. We can do some simplification as follows
\begin{align}
A &= \lim_{x \to 0}\frac{2(1 + \sec x)\log \sec x - \tan x\{x + \log(\sec x + \tan x)\}}{x^{6}}\notag\\
&= \lim_{x \to 0}\frac{(1 + \sec x)\log(1 + \tan^{2}x) - \tan x\{x + \log(\sec x + \tan x)\}}{\tan^{6}x}\cdot\frac{\tan^{6}x}{x^{6}}\notag\\
&= \lim_{t \to 0}\frac{(1 + \sqrt{1 + t^{2}})\log(1 + t^{2}) - t\{\tan^{-1}t + \log(t + \sqrt{1 + t^{2}})\}}{t^{6}}\notag\\
&= \lim_{t \to 0}\frac{g(t) - h(t)}{t^{6}}\notag
\end{align}
The Taylor series expansions of functions in the numerator are easy to find with the exception of $\log(t + \sqrt{1 + t^{2}})$. But then we know that
\begin{align}
\log(t + \sqrt{1 + t^{2}}) &= \int_{0}^{t}\frac{dx}{\sqrt{1 + x^{2}}}\notag\\
&= \int_{0}^{t}\left(1 - \frac{x^{2}}{2} + \frac{3x^{4}}{8} + o(x^{4})\right)\,dx\notag\\
&= t - \frac{t^{3}}{6} + \frac{3t^{5}}{40} + o(t^{5})\notag\\
\end{align}
Thus we can see that
\begin{align}
h(t) &= t\{\tan^{-1}t + \log(t + \sqrt{1 + t^{2}})\}\notag\\
&= t\left(t - \frac{t^{3}}{3} + \frac{t^{5}}{5} + t - \frac{t^{3}}{6} + \frac{3t^{5}}{40} + o(t^{5})\right)\notag\\
&= 2t^{2} - \frac{t^{4}}{2} + \frac{11t^{6}}{40} + o(t^{6})\notag
\end{align}
And further
\begin{align}
g(t) &= (1 + \sqrt{1 + t^{2}})\log(1 + t^{2})\notag\\
&= \left(2 + \frac{t^{2}}{2} - \frac{t^{4}}{8} + o(t^{4})\right)\left(t^{2} - \frac{t^{4}}{2} + \frac{t^{6}}{3} + o(t^{6})\right)\notag\\
&= t^{2}\left(2 + \frac{t^{2}}{2} - \frac{t^{4}}{8} + o(t^{4})\right)\left(1 - \frac{t^{2}}{2} + \frac{t^{4}}{3} + o(t^{4})\right)\notag\\
&= t^{2}\left(2 - \frac{t^{2}}{2} + \frac{7t^{4}}{24} + o(t^{4})\right)\notag\\
&= 2t^{2} - \frac{t^{4}}{2} + \frac{7t^{6}}{24} + o(t^{6})\notag
\end{align}
It is now clear that $$A = \lim_{t \to 0}\frac{g(t) - h(t)}{t^{6}} = \frac{7}{24} - \frac{11}{40} = \frac{35 - 33}{120} = \frac{1}{60}$$ and hence $$L = \frac{A}{7} = \frac{1}{420}$$
Getting rid of trigonometric functions $\sec x, \tan x$ does help in having simpler Taylor series which require very little amount of calculation. The only trigonometric function is $\tan^{-1}t$ which has the simplest Taylor series.</p>
<hr>
<p>Another perhaps simpler approach via Taylor series would be to use the series for $\sec x, \tan x$ followed by integration (to get series for $\log \sec x$, $\log(\sec x + \tan x)$) and thus finding a Taylor series for $f(x)$ directly leading to $$f(x) = 1 + \frac{x^{4}}{420} + o(x^{4})$$ Thus we can start with $$\tan x = x + \frac{x^{3}}{3} + \frac{2x^{5}}{15} + o(x^{5})$$ and $$\sec x = \dfrac{1}{\cos x} = \dfrac{1}{1 - \dfrac{x^{2}}{2} + \dfrac{x^{4}}{24} + o(x^{4})} = 1 + \frac{x^{2}}{2} + \frac{5x^{4}}{24} + o(x^{4})$$ and on integrating the series for $\tan x, \sec x$ we get
\begin{align}
\log \sec x &= \frac{x^{2}}{2} + \frac{x^{4}}{12} + \frac{x^{6}}{45} + o(x^{6})\notag\\
\log(\sec x + \tan x) &= x + \frac{x^{3}}{6} + \frac{x^{5}}{24} + o(x^{5})\notag\\
\Rightarrow (1 + \sec t)\log \sec t &= \left(2 + \frac{t^{2}}{2} + \frac{5t^{4}}{24} + o(t^{4})\right)\left(\frac{t^{2}}{2} + \frac{t^{4}}{12} + \frac{t^{6}}{45} + o(t^{6})\right)\notag\\
&= t^{2}\left(2 + \frac{t^{2}}{2} + \frac{5t^{4}}{24} + o(t^{4})\right)\left(\frac{1}{2} + \frac{t^{2}}{12} + \frac{t^{4}}{45} + o(t^{4})\right)\notag\\
&= t^{2}\left(1 + \frac{5t^{2}}{12} + \frac{137t^{4}}{720} + o(t^{4})\right)\notag\\
&= t^{2} + \frac{5t^{4}}{12} + \frac{137t^{6}}{720} + o(t^{6})\notag
\end{align}
It follows that $$a(x) = 3\int_{0}^{x}(1 + \sec t)\log\sec t\,dt = x^{3}\left(1 + \frac{x^{2}}{4} + \frac{137x^{4}}{1680} + o(x^{4})\right)$$ and
\begin{align}
b(x) &= \{x + \log(\sec x + \tan x)\}\log \sec x\notag\\
&= \left(2x + \frac{x^{3}}{6} + \frac{x^{5}}{24} + o(x^{5})\right)\left(\frac{x^{2}}{2} + \frac{x^{4}}{12} + \frac{x^{6}}{45} + o(x^{6})\right)\notag\\
&= x^{3}\left(2 + \frac{x^{2}}{6} + \frac{x^{4}}{24} + o(x^{4})\right)\left(\frac{1}{2} + \frac{x^{2}}{12} + \frac{x^{4}}{45} + o(x^{4})\right)\notag\\
&= x^{3}\left(1 + \frac{x^{2}}{4} + \frac{19x^{4}}{240} + o(x^{4})\right)\notag
\end{align}
Thus we can see that
\begin{align}
f(x) &= \frac{a(x)}{b(x)} = \dfrac{1 + \dfrac{x^{2}}{4} + \dfrac{137x^{4}}{1680} + o(x^{4})}{1 + \dfrac{x^{2}}{4} + \dfrac{19x^{4}}{240} + o(x^{4})}\notag\\
&= 1 + px^{2} + qx^{4} + o(x^{4})\notag
\end{align}
where $$p + \frac{1}{4} = \frac{1}{4},\, q + \frac{p}{4} + \frac{19}{240} = \frac{137}{1680}$$ so that $$p = 0, q = \frac{137 - 133}{1680} = \frac{4}{1680} = \frac{1}{420}$$ and finally we get $$f(x) = 1 + \frac{x^{4}}{420} + o(x^{4})$$ as desired. This however does involve the division of power series one time and multiplication of power series 2 times (not to mention the use of not so familiar Taylor series for $\sec x, \tan x$ in the first place). The first approach which I have given in my answer uses the product of two series only one time and one application of LHR (given in the question itself).</p>
|
2,027,044 |
<p>Prove:
$$
(a+b)^\frac{1}{n} \le a^\frac{1}{n} + b^\frac{1}{n}, \qquad \forall n \in \mathbb{N}
$$
I have tried using the triangle inequality $ |a + b| \le |a| + |b| $, without any success.</p>
|
Community
| -1 |
<p><strong>Hint</strong>: You can show (equivalently) that $x\mapsto \sqrt[n]{x}$ is subadditive on $[0,\infty)$.</p>
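<p>A minimal sketch of that argument (for $a,b\ge 0$, not both zero): since $t^{1/n}\ge t$ for $t\in[0,1]$,
$$\left(\frac{a}{a+b}\right)^{1/n}+\left(\frac{b}{a+b}\right)^{1/n}\ \ge\ \frac{a}{a+b}+\frac{b}{a+b}=1,$$
and multiplying through by $(a+b)^{1/n}$ gives $a^{1/n}+b^{1/n}\ge (a+b)^{1/n}$.</p>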
|
1,023,193 |
<p>Proving this formula
$$
\pi^{2}
=\sum_{n\ =\ 0}^{\infty}\left[\,{1 \over \left(\,2n + 1 + a/3\,\right)^{2}}
+{1 \over \left(\, 2n + 1 - a/3\,\right)^{2}}\,\right]
$$
if $a$ is an even integer such that
$$
a \geq 4\quad\mbox{and}\quad{\rm gcd}\left(\,a,3\,\right) = 1
$$</p>
|
Felix Marin
| 85,343 |
<p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$</p>
<blockquote>
<p>With $\ds{a\ \geq\ 4}$ an <i>even integer</i> and $\ds{{\rm gcd}\pars{a,3} = 1}$:</p>
</blockquote>
<p>\begin{align}&\color{#66f}{\large\sum_{n\ =\ 0}^{\infty}%
\bracks{{1 \over \pars{2n + 1 + a/3}^{2}} + {1 \over \pars{2n + 1 - a/3}^{2}}}}
\\[5mm]&={1 \over 4}\bracks{%
\sum_{n\ =\ 0}^{\infty}{1 \over \pars{n + 1/2 + a/6}^{2}}+
\sum_{n\ =\ 0}^{\infty}{1 \over \pars{n + 1/2 - a/6}^{2}}}
\\[5mm]&={1 \over 4}\bracks{%
\Psi'\pars{\half + {a \over 6}} + \Psi'\pars{\half - {a \over 6}}}
\end{align}</p>
<blockquote>
<p>where $\ds{\Psi\pars{z}}$ is the
<a href="http://people.math.sfu.ca/~cbm/aands/page_258.htm" rel="nofollow">Digamma Function</a>
$\ds{\bf 6.3.1}$ and we used the <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow">identity</a> $\ds{\bf 6.4.10}$
$\ds{\Psi^{\rm\pars{m}}\pars{z} = \pars{-1}^{\rm m}\,{\rm m}!\sum_{k\ =\ 0}^{\infty}{1 \over \pars{k + z}^{\rm m + 1}}\,,\ z\not=0,-1,-2,\ldots}$</p>
</blockquote>
<p>With <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow">Euler Reflection Formula</a>
${\bf 6.4.7}$
$\ds{\Psi^{\rm\pars{m}}\pars{1 - z}
+ \pars{-1}^{\rm m + 1}\Psi^{\rm\pars{m}}\pars{z}
=\pars{-1}^{\rm m}\,\pi\,\totald[\rm m]{\cot\pars{\pi z}}{z}}$ we'll get
\begin{align}&\color{#66f}{\large\sum_{n\ =\ 0}^{\infty}%
\bracks{{1 \over \pars{2n + 1 + a/3}^{2}} + {1 \over \pars{2n + 1 - a/3}^{2}}}}
={1 \over 4}\bracks{-\pi\,\totald{\cot\pars{\pi z}}{z}}_{z\ =\ 1/2\ -\ a/6}
\\[5mm]&={1 \over 4}\,\pi^{2}\csc^{2}\pars{\pi\bracks{\half - {a \over 6}}}
={1 \over \bracks{2\cos\pars{\pi a/6}}^{2}}\,\pi^{2}
\end{align}</p>
<blockquote>
<p>However $\ds{a = 4,8,10,14,16,20,22,26\,\ldots}$ By subtracting multiples of $\ds{6}$ to that sequence we'll get $\ds{4,2,4,2,4,2,4,2\ldots}$ It's clear that $\ds{\cos\pars{\pi a/6} = \pm 1/2}$ because $\ds{\cos\pars{\pi\, 4/6} = -1/2}$
and $\ds{\cos\pars{\pi\, 2/6} = 1/2}$. So,</p>
</blockquote>
<p>\begin{align}&\color{#66f}{\large\sum_{n\ =\ 0}^{\infty}%
\bracks{{1 \over \pars{2n + 1 + a/3}^{2}} + {1 \over \pars{2n + 1 - a/3}^{2}}}}
=
\color{#66f}{\large \pi^{2}}
\end{align}</p>
|
917,276 |
<p>If $U$ and $V$ are independent identically distributed standard normal, what is the distribution of their difference?</p>
<p>I will present my answer here. I am hoping to know if I am right or wrong.</p>
<p>Using the method of moment generating functions, we have</p>
<p>\begin{align*}
M_{U-V}(t)&=E\left[e^{t(U-V)}\right]\\
&=E\left[e^{tU}\right]E\left[e^{tV}\right]\\
&=M_U(t)M_V(t)\\
&=\left(M_U(t)\right)^2\\
&=\left(e^{\mu t+\frac{1}{2}t^2\sigma ^2}\right)^2\\
&=e^{2\mu t+t^2\sigma ^2}\\
\end{align*}
The last expression is the moment generating function for a random variable distributed normal with mean $2\mu$ and variance $2\sigma ^2$. Thus $U-V\sim N(2\mu,2\sigma ^2)$.</p>
<p>For the third line from the bottom, it follows from the fact that the moment generating functions are identical for $U$ and $V$.</p>
<p>Thanks for your input.</p>
<p>EDIT: OH I already see that I made a mistake, since the random variables are distributed STANDARD normal. I will change my answer to say $U-V\sim N(0,2)$.</p>
|
Alex
| 38,873 |
<p>With the convolution formula:
<span class="math-container">\begin{align}
f_{Z}(z) &= \frac{dF_Z(z)}{dz} = P'(Z<z)_z = P'(X<Y+z)_z = (\int_{-\infty}^{\infty}\Phi_{X}(y+z)\varphi_Y(y)dy)_z \\
&= \frac{1}{2 \pi}\int_{-\infty}^{\infty}e^{-\frac{(z+y)^2}{2}}e^{-\frac{y^2}{2}}dy = \frac{1}{2 \pi}\int_{-\infty}^{\infty}e^{-(y+\frac{z}{2})^2}e^{-\frac{z^2}{4}}dy = \frac{1}{\sqrt{2\pi\cdot 2}}e^{-\frac{z^2}{2 \cdot 2}}
\end{align}</span>
which is the density of <span class="math-container">$Z \sim N(0,2)$</span>. Interchange of derivative and integral is possible because <span class="math-container">$y$</span> is not a function of <span class="math-container">$z$</span>; after that I completed the square and used the Gaussian integral <span class="math-container">$\int_{-\infty}^{\infty}e^{-u^2}\,du=\sqrt{\pi}$</span>. The integration bounds are the same as for each r.v.</p>
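<p>A quick Monte Carlo sanity check (a small Python sketch): the simulated differences should have mean close to $0$ and variance close to $2$.</p>

<pre><code>import random

random.seed(0)
n = 200_000
d = [random.gauss(0, 1) - random.gauss(0, 1) for _ in range(n)]
m = sum(d) / n
v = sum((t - m) ** 2 for t in d) / (n - 1)
print(round(m, 3), round(v, 3))     # roughly 0.0 and 2.0
</code></pre>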
|
1,005,291 |
<p>I understand that in order to prove this to be one to one, I need to prove $2$ numbers, $a$ and $b$, in the same set are equal. </p>
<p>This is what I did:</p>
<p>$$\sqrt{a} + a + 2 = \sqrt{b} + b + 2$$
$$\sqrt{a} + a = \sqrt{b} + b$$
$$a + a^2 = b + b^2$$</p>
<p>How would I arrive at $a = b$? Is it possible?</p>
|
Kim Jong Un
| 136,641 |
<p>You were basically done:
$$
\sqrt{a}+a=\sqrt{b}+b\implies 0=(\sqrt{a}-\sqrt{b})(\sqrt{a}+\sqrt{b}+1)\implies \sqrt{a}=\sqrt{b}\implies a=b.
$$</p>
|
1,640,383 |
<p>We have function $f:\mathbb{R}\rightarrow \mathbb{R}$ with $$f\left(x\right)=\frac{1}{3x+1}\:$$ $$x\in \left(-\frac{1}{3},\infty \right)$$
Write the Maclaurin series for this function.</p>
<p>Alright so from what I learned in class, the Maclaurin series is basically the Taylor series for when we have $x_o=0$ and we write the remainder in the Lagrange form. It has this shape:
$f\left(x\right)=\left(T_n;of\right)\left(x\right)+\left(R^Ln;of\right)\left(x\right)=\sum _{k=0}^n\left(\frac{f^{\left(k\right)}\left(0\right)}{k!}x^n+\frac{f^{\left(n+1\right)}\left(c\right)}{\left(n+1\right)!}x^{n+1}\right)$</p>
<p>So when I compute derivatives of my function I can see that the form they take is:$$f^{\left(n\right)}\left(x\right)=\left(-1\right)^n\cdot \frac{3^n\cdot n!}{n!}x^n$$</p>
<p>Does that mean that the Maclaurin series is basically: $$f\left(x\right)=1-3x+3^2x^2-3^3x^3+....+\left(-1\right)^n\cdot 3^n\cdot x^n$$
?</p>
<p>But what about that remainder in Lagrange form? I don't get that part. We didn't really have examples in class, so I've no idea if what I'm doing is correct. Can someone help me with this a bit?</p>
|
zz20s
| 213,842 |
<p>HINT: Use the fact that $\displaystyle \sum\limits_{n=0}^{\infty} x^n=\frac{1}{1-x}$ for $|x|<1$.</p>
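<p>Concretely (a short sketch of where the hint leads): replacing $x$ by $-3x$ gives
$$\frac{1}{1+3x}=\sum_{k=0}^{\infty}(-1)^k 3^k x^k,\qquad |x|<\tfrac{1}{3},$$
and since $f^{(n)}(x)=\dfrac{(-1)^n 3^n\, n!}{(3x+1)^{n+1}}$, the Lagrange remainder after the degree-$n$ partial sum is
$$\frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1}=\frac{(-3)^{n+1}x^{n+1}}{(3c+1)^{n+2}}$$
for some $c$ between $0$ and $x$.</p>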
|
936,611 |
<p>let $p,q$ is postive integer,and such
$$\dfrac{95}{36}>\dfrac{p}{q}>\dfrac{96}{37}$$</p>
<p>Find the minimum of the $q$</p>
<p>maybe can use
$$95q>36p$$
and $$37p>96q$$
and then find this minimum of the value?</p>
<p>before I find a
$$2.638\approx \dfrac{95}{36}>\dfrac{49}{18}\approx 2.722>\dfrac{96}{37}\approx 2.59 $$ is not such condition</p>
<p>idea 2: since
$$\dfrac{95}{36}=\dfrac{95\cdot 37}{36\cdot 37}=\dfrac{3515}{1332}$$
$$\dfrac{96}{37}=\dfrac{96\cdot 36}{36\cdot 37}=\dfrac{3456}{1332}$$
so
$$\dfrac{3515}{1332}>\dfrac{p}{q}>\dfrac{3456}{1332}$$
so
$$p\in(3456,3515),q=1332$$</p>
|
mathlove
| 78,967 |
<p>We have
$$2.64\gt a=\frac{17575}{5\cdot 36\cdot 37}=\frac{95}{36}\gt \color{red}{\frac{13}{5}}=2.6=\frac{17316}{5\cdot 36\cdot 37}\gt\frac{96}{37}=\frac{17280}{5\cdot 36\cdot 37}=b\gt 2.59.$$
Note that
$$\frac{11}{4}\gt a\gt b\gt\frac{10}{4}$$</p>
<p>$$\frac{8}{3}\gt a\gt b\gt\frac{7}{3}$$</p>
<p>$$\frac{6}{2}\gt a\gt b\gt\frac{5}{2}$$
$$\frac{3}{1}\gt a\gt b\gt\frac{2}{1}$$
Hence, the minimum of $q$ is $5$.</p>
|
936,611 |
<p>let $p,q$ is postive integer,and such
$$\dfrac{95}{36}>\dfrac{p}{q}>\dfrac{96}{37}$$</p>
<p>Find the minimum of the $q$</p>
<p>maybe can use
$$95q>36p$$
and $$37p>96q$$
and then find this minimum of the value?</p>
<p>before I find a
$$2.638\approx \dfrac{95}{36}>\dfrac{49}{18}\approx 2.722>\dfrac{96}{37}\approx 2.59 $$ is not such condition</p>
<p>idea 2: since
$$\dfrac{95}{36}=\dfrac{95\cdot 37}{36\cdot 37}=\dfrac{3515}{1332}$$
$$\dfrac{96}{37}=\dfrac{96\cdot 36}{36\cdot 37}=\dfrac{3456}{1332}$$
so
$$\dfrac{3515}{1332}>\dfrac{p}{q}>\dfrac{3456}{1332}$$
so
$$p\in(3456,3515),q=1332$$</p>
|
Jack D'Aurizio
| 44,121 |
<p>An interesting trick to solve this kind of problem is to consider the continued fractions of the LHS and the RHS. We have:
$$\frac{95}{36}=[2;1,1,1,3,3],\qquad \frac{96}{37}=[2;1,1,2,7]$$
hence
$$\frac{13}{5}=[2;1,1,2]$$
just lies between the LHS and the RHS, and it is the rational number with the smallest denominator lying in that interval.</p>
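<p>As a cross-check, a brute-force search over denominators (a small Python sketch) finds the same fraction:</p>

<pre><code>from fractions import Fraction
from math import floor

lo, hi = Fraction(96, 37), Fraction(95, 36)
q = 1
while True:
    p = floor(lo * q) + 1          # smallest integer with p/q strictly above lo
    if hi > Fraction(p, q):
        print(q, Fraction(p, q))   # prints: 5 13/5
        break
    q += 1
</code></pre>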
|
4,379,693 |
<p>How to change the order of integration:</p>
<p><span class="math-container">$$\int_{-1}^1dx \int_{1-x^2}^{2-x^2}f(x,y)dy$$</span></p>
<p>I tried to sketch the area and got:</p>
<p><a href="https://i.stack.imgur.com/xxu5T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xxu5T.png" alt="enter image description here" /></a></p>
<p>Where red line is <span class="math-container">$x^2< 1$</span>, green lines are <span class="math-container">$1-y$</span> and <span class="math-container">$2-y$</span>. However I can't seem to get it right...</p>
<p>The solution should be:</p>
<p><a href="https://i.stack.imgur.com/1Hb6O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Hb6O.png" alt="enter image description here" /></a></p>
|
callculus42
| 144,421 |
<p>I simplify the term as far as I can:</p>
<p><span class="math-container">$$\frac{1}{n^2}Var[\sum_{i=1}^n (X_i-E[X])^2]$$</span></p>
<p>Next you multiply out the square brackets, still leaving the variance operator outside the brackets.</p>
<p><span class="math-container">$$\frac{1}{n^2}Var[\sum_{i=1}^n (X_i^2-2E[X]X_i+E[X]^2)]$$</span></p>
<p><span class="math-container">$$\frac{1}{n^2}Var[\sum_{i=1}^n X_i^2-2E[X]\sum_{i=1}^n X_i+\sum_{i=1}^nE[X]^2)]$$</span></p>
<p><span class="math-container">$E[X]^2$</span> is a constant and its variance is 0.</p>
<p><span class="math-container">$$\frac{1}{n^2}Var[\sum_{i=1}^n X_i^2-2E[X]\sum_{i=1}^n X_i+n\cdot E[X]^2)]$$</span></p>
<p><span class="math-container">$$\frac{1}{n^2}[Var\left(\sum_{i=1}^n X_i^2\right)-4E[X]^2Var\left(\sum_{i=1}^n X_i\right)]$$</span></p>
<p>Since <span class="math-container">$\sum_{i=1}^n X_i=1$</span>, we have a variance of a constant again.</p>
<p><span class="math-container">$$\frac{1}{n^2}\cdot Var\left(\sum_{i=1}^n X_i^2\right)$$</span></p>
<p>To calculate/estimate it explicitly, further information is needed.</p>
|
233,915 |
<p>a,b are elements of the group G</p>
<p>I have no idea how to even start - I was thinking of defining a,b as two square matrices and using the non-commutative property of matrix multiplication but I'm not sure if that's the way to go...</p>
|
Andreas Blass
| 48,510 |
<p>Since you want this to be in a group, take the group of permutations of $\{1,2,3\}$ and take the counterexample $a$ and $b$ to be two distinct transpositions, say $(1,2)$ and $(2,3)$.</p>
|
3,375,181 |
<p>How do I graph f(x)=1/(1+e^(1/x)) except for replacing variable x with numbers?
Besides, I get the picture of the answer online <a href="https://i.stack.imgur.com/gZb7X.png" rel="nofollow noreferrer">enter image description here</a> and do not understand why x = 0 exists on this graph.</p>
|
Cesareo
| 397,348 |
<p>Hint.</p>
<p>From</p>
<p><span class="math-container">$$
y y''-2(y')^2=y^2\Rightarrow \frac{y''}{y}-2\left(\frac{y'}{y}\right)^2=1
$$</span></p>
<p>but</p>
<p><span class="math-container">$$
\left(\frac{y'}{y}\right)' = -\left(\frac{y'}{y}\right)^2+\frac{y''}{y}
$$</span></p>
<p>then</p>
<p><span class="math-container">$$
\left(\frac{y'}{y}\right)'-\left(\frac{y'}{y}\right)^2 = 1
$$</span></p>
<p>now calling <span class="math-container">$\zeta = \frac{y'}{y}$</span> we have</p>
<p><span class="math-container">$$
\zeta'-\zeta^2 = 1
$$</span></p>
<p>which is separable.</p>
|
902,313 |
<p>The wikipedia page on clopen sets says "Any clopen set is a union of (possibly infinitely many) connected components." </p>
<p>I thought any topological space is the union of its connected components? Why is this singled out here for clopen sets?</p>
<p>Does it have something to do with it $x\in C$ a clopen subset $C$ of a space $X$, then $C$ actually contains the entire component of $x$ in $X$?</p>
|
Hamou
| 165,000 |
<p>Here is what I mean: if $C$ is a clopen subset of a space $X$, then $C=\cup_{x\in C}C_x$, where $C_x$ is the connected component of $X$ containing $x$.<br>
It is clear that $C\subset \cup_{x\in C}C_x$.<br>
Let $x\in C$; we want to show that $C_x\subset C$. Take $A=C_x\cap C$ and $B=C_x\cap C^c$. Then $A\cup B=C_x$ and $A\cap B=\emptyset$, and $A$, $B$ are closed subsets of $C_x$, hence also open subsets of $C_x$. Since $C_x$ is connected, $A=\emptyset$ or $B=\emptyset$; but $x\in A$, so $B=\emptyset$. This shows that $A=C_x$, i.e. $C_x\subset C$. </p>
|
459,428 |
<p>How does one evaluate a function in the form of
$$\int \ln^nx\space dx$$
My trusty friend Wolfram Alpha is blabbering about $\Gamma$ functions and I am having trouble following. Is there a method for indefinitely integrating such and expression? Or if there isn't a method how would you tackle the problem?</p>
|
OR.
| 26,489 |
<p>Write $x=e^y$, and $\text{d}x=e^y\text{d}y$, and integrate by parts a few times: $$\int \ln^n(x)\,\text{d}x=\int y^{n}e^{y}\,\text{d}y=y^ne^y-n\int y^{n-1}e^y\,\text{d}y.$$ </p>
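<p>Iterating this reduction gives a closed form, easily checked by differentiation (a sketch): $$\int \ln^n x\,\text{d}x=x\sum_{k=0}^{n}(-1)^{n-k}\frac{n!}{k!}\ln^k x+C.$$</p>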
|
4,298,951 |
<p>Let us define a sequence <span class="math-container">$(a_n)$</span> as follows:</p>
<p><span class="math-container">$$a_1 = 1, a_2 = 2 \text{ and } a_{n} = \frac14 a_{n-2} + \frac34 a_{n-1}$$</span></p>
<p>Prove that the sequence <span class="math-container">$(a_n)$</span> is Cauchy and find the limit.</p>
<hr />
<p>I have proved that the sequence <span class="math-container">$(a_n)$</span> is Cauchy. But unable to find the limit. I have observed that the sequence <span class="math-container">$(a_n)$</span> is decreasing for <span class="math-container">$n \ge 2$</span>.</p>
|
Pedro Ignacio Martinez Bruera
| 616,313 |
<p>For the limit I would treat it as a second-order difference equation:
<span class="math-container">$$
4a_{n} -3a_{n-1} - a_{n-2}=0
$$</span>
Conjecture a solution of the form <span class="math-container">$a_{n}=C b^{n}$</span>; substituting gives <span class="math-container">$4b^{2}-3b-1=0$</span>, so <span class="math-container">$b=1$</span> or <span class="math-container">$b=-\frac{1}{4}$</span>. So
<span class="math-container">$$
a_{n}=C_{1}+C_{2}\left(-\frac{1}{4}\right)^{n}
$$</span>
Using the initial conditions
<span class="math-container">$$4 = 4C_{1}-C_{2}$$</span>
<span class="math-container">$$32 = 16C_{1}+C_{2}$$</span>
Solve for <span class="math-container">$C_{1}$</span> and <span class="math-container">$C_{2}$</span>. Finally, notice that since <span class="math-container">$\left|-\frac{1}{4}\right|<1$</span>, we have <span class="math-container">$\lim_{n\to\infty}a_{n}=C_{1}$</span>. In this setting <span class="math-container">$C_{1}=\frac{9}{5}$</span> and <span class="math-container">$C_{2}=\frac{16}{5}$</span>.</p>
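<p>A quick check of the closed form in exact arithmetic (a small Python sketch, with <span class="math-container">$C_{1}=\frac{9}{5}$</span> and <span class="math-container">$C_{2}=\frac{16}{5}$</span> as above):</p>

<pre><code>from fractions import Fraction as F

a = [F(1), F(2)]
for n in range(3, 11):
    a.append(F(1, 4) * a[-2] + F(3, 4) * a[-1])

def closed(n):
    return F(9, 5) + F(16, 5) * F(-1, 4) ** n

print(all(a[n - 1] == closed(n) for n in range(1, 11)))   # True
print(float(a[-1]))                                       # already close to 9/5 = 1.8
</code></pre>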
|
4,298,951 |
<p>Let us define a sequence <span class="math-container">$(a_n)$</span> as follows:</p>
<p><span class="math-container">$$a_1 = 1, a_2 = 2 \text{ and } a_{n} = \frac14 a_{n-2} + \frac34 a_{n-1}$$</span></p>
<p>Prove that the sequence <span class="math-container">$(a_n)$</span> is Cauchy and find the limit.</p>
<hr />
<p>I have proved that the sequence <span class="math-container">$(a_n)$</span> is Cauchy. But unable to find the limit. I have observed that the sequence <span class="math-container">$(a_n)$</span> is decreasing for <span class="math-container">$n \ge 2$</span>.</p>
|
xpaul
| 66,420 |
<p>Using
<span class="math-container">$$\displaystyle a_{n}-a_{n-1}=-\frac{1}{4}\left(a_{n-1}-a_{n-2}\right)$$</span>
one has, for all <span class="math-container">$n, p\in\mathbb{N}$</span>,
<span class="math-container">$$ |a_{n+p}-a_n|\le\sum_{k=0}^{p-1}|a_{n+k+1}-a_{n+k}|\le\sum_{k=0}^{p-1}\bigg(\frac{1}{4}\bigg)^{k}|a_{n+1}-a_n|\le\frac{4}{3}\bigg(\frac{1}{4}\bigg)^{n-1}|a_2-a_1|, $$</span>
since <span class="math-container">$|a_{n+1}-a_n|=\big(\frac{1}{4}\big)^{n-1}|a_2-a_1|$</span> and <span class="math-container">$\sum_{k\ge 0}\big(\frac{1}{4}\big)^{k}=\frac{4}{3}$</span>. This implies that <span class="math-container">$\{a_n\}$</span> is Cauchy.</p>
|
355,888 |
<p>Consider
$x''-2x'+x= te^t$</p>
<p>Determine the solution with initial values $x(1) = e,$ $x'(1) = 0.$</p>
<p>I know this looks like and probably is a very easy question, but i'm not getting the right answer when i try and solve putting into quadratic form. Could someone please demonstrate or show me a different method? </p>
<p>Many thanks :)</p>
|
Felix Marin
| 85,343 |
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">$\ds{\bbox[5px,#ffd]{\on{x}''\pars{t} -2\on{x}'\pars{t} + \on{x}\pars{t} = t\expo{t}}\,,\qquad
\begin{array}{c}
\mbox{Initial Conditions}
\\
\left\{\begin{array}{rcl}
\ds{\on{x}\pars{1}} & \ds{=} & \ds{\expo{}}
\\[1mm]
\ds{\on{x}'\pars{1}} & \ds{=} & \ds{0}
\end{array}\right.\end{array}}$</span></p>
<hr>
<ul>
<li><span class="math-container">$\ds{\pars{\totald{}{t} - 1}\
\overbrace{\pars{\totald{}{t} - 1}
\on{x}\pars{t}}^{\ds{\on{y}\pars{t}}}\ =\ t\expo{t}}$</span></li>
</ul>
<hr>
<span class="math-container">\begin{align}
&\totald{\bracks{\expo{-t}\on{y}\pars{t}}}{t} = t
\\[2mm]
&\expo{-t}\on{y}\pars{t} = {1 \over 2}\,t^{2} + a \implies
\on{y}\pars{t} =
{1 \over 2}\,t^{2}\expo{t} + a\expo{t}\,\quad a = \mbox{constant}
\end{align}</span>
<hr>
<ul>
<li><span class="math-container">$\ds{\pars{\totald{}{t} - 1}\on{x}\pars{t} =
{1 \over 2}\,t^{2}\expo{t} + a\expo{t}}$</span></li>
</ul>
<hr>
<span class="math-container">\begin{align}
&\totald{\bracks{\expo{-t}\on{x}\pars{t}}}{t} =
{1 \over 2}\,t^{2} + a
\\[2mm] &\
\expo{-t}\on{x}\pars{t} = {1 \over 6}\,t^{3} + at + b\,,\quad b = \mbox{constant}
\\[2mm] &\
\on{x}\pars{t} = \pars{{1 \over 6}\,t^{3} + at + b}\expo{t}
\\[2mm] &\
\left\{\begin{array}{rcrcl}
\ds{\expo{}a} & \ds{+} & \ds{\expo{}b} & \ds{=} &
\ds{{5 \over 6}\,\expo{}}
\\[1mm]
\ds{2\expo{}a} & \ds{+} & \ds{\expo{}b} & \ds{=} &
\ds{-\,{2 \over 3}\,\expo{}}
\end{array}\right.
\\[2mm] &\ \implies a = -\,{3 \over 2}\ \mbox{and}\
b = {7 \over 3}
\\[5mm] &\
\bbx{\on{x}\pars{t} = {1 \over 6}\pars{t^{3} -
9t + 14}\expo{t}} \\ &
\end{align}</span>
|
407,890 |
<p>Here comes a second, more refined version of my conjecture, as critics pointed out that the <a href="https://math.stackexchange.com/questions/407812/conjecture-on-combinate-of-positive-integers-in-terms-of-primes">first version</a> was trivial.</p>
<p>Theorem <a href="https://oeis.org/A226233" rel="nofollow noreferrer">2</a></p>
<p>for a given prime $p$ and a given power $m$ the representation of any positive integer $n\in \Bbb N$ in the form:
$$ n=(a_u p - b_u) \; p^m$$
is <em>unique</em> with the coefficient pairs (OEIS: <a href="https://oeis.org/A226233" rel="nofollow noreferrer">A226233</a>, <a href="https://oeis.org/A226236" rel="nofollow noreferrer">A226236</a>)
$$ \left\{
\begin{array}{l l}
\langle a_u \rangle=1+\left\lfloor\frac{u-1}{p-1}\right\rfloor=\frac{(p-1)+u-1-((u-1)mod(p-1))}{p-1}\\
\langle b_u \rangle=u-(p-1)\left\lfloor\frac{u-1}{p-1}\right\rfloor=1+((u-1)mod(p-1))
\end{array} \right.$$</p>
<p>while $a_u,b_u,u\in \Bbb N$ and $m\in \Bbb N_0$.</p>
<p>Can anyone help with a proof, or connect this to an existing unsolved conjecture?</p>
<p>Note: $\lfloor\cdot \rfloor$ denotes the floor function.</p>
<p><a href="https://oeis.org/A226233" rel="nofollow noreferrer">2</a>: Theorem to be cited Vaseghi 2013</p>
|
Ross Millikan
| 1,827 |
<p>I don't know why you subscript $a,b$ with $u$ or where $u$ comes from. If you say $p^m$ is the highest power of $p$ that divides $n$ this is essentially the Euclidean division algorithm. $b_u$ is the remainder term, here in the range $[1,p-1]$. Since you do not say $p^m$ is the largest power of $p$ dividing $n$, we can decrement $m$ by $1$ and multiply $a_u, b_u$ by $p$ and get another representation.</p>
|
1,458,579 |
<p>Suppose $f_n$ and $g_n$ be two sequences of functions. Also, $f_n.g_n$ converges to $f.g$ and $g_n$ converges to $g$. Can we prove $f_n$ converges to $f$? How?</p>
|
vadim123
| 73,324 |
<p>Forget about every collection except the second. It starts with a bit, say (without loss of generality) a zero bit. Now, you generate additional random bits and stop when you get a one. There is a $p$ probability that the very next bit is a $1$, so your second collection has size $1$. There is a $(1-p)p$ probability that you get one more zero, and then a $1$, in which case your second collection has size $2$. Can you continue?</p>
|
197,441 |
<p>I have a list,</p>
<pre><code>l1 = {{a, b, 3, c}, {e, f, 5, k}, {n, k, 12, m}, {s, t, 1, y}}
</code></pre>
<p>and want to apply differences on the third parts and keep the parts right of the numerals collected.</p>
<p>My result should be</p>
<pre><code>l2 = {{2, c, k}, {7, k, m}, {-11, m, y}}
</code></pre>
<p>I tried Map and MapAt, but I could not get anywhere. I could work around split things up and connect again. But is there a better way to do it?</p>
|
Michael E2
| 4,999 |
<p>Perhaps this?:</p>
<pre><code>l1 = {{a, b, 3, c}, {e, f, 5, k}, {n, k, 12, m}, {s, t, 1, y}};
l2 = Differences[l1[[All, 3 ;;]]] /. b_ - a_ :> Sequence[a, b]
(* {{2, c, k}, {7, k, m}, {-11, m, y}} *)
</code></pre>
<p>It assumes the letter symbols are simple and not complicated expressions.</p>
<p>This is more complicated, but more robust:</p>
<pre><code>Flatten /@
Transpose@
MapAt[Differences,
Partition[Transpose@l1[[All, 3 ;;]], {1, 2}, {1, 1}], {1, All, 1}]
</code></pre>
|
65,912 |
<p>How do I show that $s=\sum\limits_{-\infty}^{\infty} {1\over (x-n)^2}$ on $x\not\in \mathbb Z$ is differentiable without using its compact form? I realize that the sequence of sums $s_a=\sum\limits_{-a}^{a} {1\over (x-n)^2}$ is not uniformly convergent. </p>
<p>I also tried to prove that it is continuous by using the usual $\varepsilon \over 3$ method. And it seems to apply because each $s_a$ is continuous and they converge pointwise to $s$. But then I realized that this should not be the right proof because I didn't use any special property of the given functions and the general case only works for uniform convergence. I am very confused. Please help!</p>
<p>Actually I do realize that if I could prove differentiability, continuity follows.</p>
<p>Thanks.</p>
|
Zarrax
| 3,035 |
<p>There's a theorem they teach in undergraduate analysis classes which says that if $\{f_n(x)\}$ are $C^1$ functions on an interval $[a,b]$ such that $|f_n(x)| \leq M_n$ and
$|f_n'(x)| \leq N_n$ where $\sum_n M_n$ and $\sum_n N_n$ are both finite, then $\sum_n f_n(x)$ is a differentiable function whose derivative is $\sum_n f_n'(x)$. </p>
<p>You can apply this result to your series on any interval $[k + \epsilon, k + 1 - \epsilon]$; if $n \geq 2|k| + 2$ for example ${1 \over (x - n)^2} \leq {4 \over n^2}$ and similarly the derivative ${2 \over (x - n)^3}$ is of absolute value at most ${16 \over |n|^3}$; these inequalities follow from the fact that $|x| \leq {n \over 2}$ and therefore $|x - n| \geq {n \over 2}$ when $x$ is in the interval $[k + \epsilon, k + 1 - \epsilon]$. (The terms for either series when $n < 2|k| + 2$ have uniform bounds on the interval simply because they're continuous. Hence these earlier terms don't affect the applicability of the result.)</p>
<p>Note the same argument applied repeatedly shows the limit function is $C^{\infty}$, and there is an analytic version of this that shows the resulting function is analytic except at the integers.</p>
|
1,923,034 |
<p>A bagel store sells six different kinds of bagels. Suppose you choose 15 bagels at random. What is the probability that your choice contains at least one bagel of each kind? If one of the bagels is Sesame, what is the probability that your choice contains at least three Sesame bagels?</p>
<p>My approach to the first problem was the equation $x_1+x_2+x_3+x_4+x_5+x_6=15, x_i \geq 1$ which is the same as $y_1+y_2+y_3+y_4+y_5+y_6=9, y_i \geq 0$ which has $ 14 \choose 9$ solutions. So that is 2002 solutions. And there are in total $22 \choose 15$ solutions to the equation without the restriction. So in a percentage, there is a $\frac{2002}{15504}$ or 12.9% we will get one of each kind.</p>
<p>For the second problem, I used the equation $x_1+x_2+x_3+x_4+x_5+x_6=15, x_1 \geq 3, x_i \geq 0, i \neq 1$. This gives $17 \choose 12$ solutions, which gives a $\frac{6188}{15504}$ or a 39% chance of getting a sesame bagel. </p>
<p>Is my approach for both of these right? (The percentages of these happening seem really high)</p>
|
Jack D'Aurizio
| 44,121 |
<p>The first probability is given by
$$ \frac{{15 \brace 6}\cdot 6!}{6^{15}}\approx \color{red}{64,\!42\%} $$
since a working choice is associated with a surjective function from $[1,15]$ onto $[1,6]$. ${15\brace 6}$ is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow">Stirling number of the second kind</a>, which can be computed through the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow">inclusion-exclusion principle</a>. The second probability is given by
$$ \frac{1}{6^{15}}\sum_{k=3}^{15}\binom{15}{k}5^{15-k}\approx \color{red}{46,\!78\%}$$
since a working choice is associated with a function $f$ from $[1,15]$ to $[1,6]$ with the property that $\left|f^{-1}(1)\right|\geq 3$.</p>
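<p>Both numbers are easy to verify numerically under the same model (each of the $15$ bagels chosen independently and uniformly among the $6$ kinds); ${15 \brace 6}\cdot 6!$ is just the number of surjections, computed below by inclusion-exclusion (a small Python sketch):</p>

<pre><code>from math import comb

total = 6 ** 15

# surjections of 15 independent uniform choices onto all 6 kinds
onto = sum((-1) ** j * comb(6, j) * (6 - j) ** 15 for j in range(7))
print(onto / total)            # about 0.6442

# at least three Sesame bagels among the 15 choices
at_least_3 = sum(comb(15, k) * 5 ** (15 - k) for k in range(3, 16))
print(at_least_3 / total)      # about 0.4678
</code></pre>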
|
116,537 |
<p>Let's say that I have</p>
<pre><code>x^2+x
</code></pre>
<p>Is there a way to map $x$ to the first derivative of a function and $x^2$ to the second derivative of the same function? According to <a href="http://reference.wolfram.com/language/ref/Slot.html" rel="nofollow">http://reference.wolfram.com/language/ref/Slot.html</a>, I know that I can change </p>
<pre><code>x /. x -> D[#, {y, 1}] &[a[y, z]]
x^2 /. x^2 -> D[#, {y, 2}] &[a[y, z]]
</code></pre>
<p>to yield the first and second derivatives of $a(y,z)$, respectively. I also know that I can use</p>
<pre><code>x^2+x/. x^2 -> D[#, {y, 2}] &[a[y, z]] /. x -> D[#, {y, 1}] &[a[y, z]]
</code></pre>
<p>but if I have say, a polynomial of degree 70, manually telling Mathematica to do this is highly inefficient. Is there a method, such that, for $x^n$, I can tell Mathematica to map $x^n$ to the $n^{th}$ derivative of a function?</p>
|
Kuba
| 5,478 |
<pre><code>{x, x^2, x^2 + x} /. x^n_. :> Derivative[n, 0][a][y, z]
</code></pre>
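<p>As a quick sanity check on something closer to the degree-70 case mentioned in the question (the coefficients <code>c[n]</code> below are only placeholders, not anything from the original post):</p>
<pre><code>(* hypothetical high-degree polynomial with placeholder coefficients c[n] *)
poly = Sum[c[n] x^n, {n, 0, 70}];

(* each power x^n becomes the n-th y-derivative of a[y, z]; the constant term is left alone *)
poly /. x^n_. :> Derivative[n, 0][a][y, z]
</code></pre>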
|
3,088,766 |
<p>I need to prove that the premise <span class="math-container">$A \to (B \vee C)$</span> leads to the conclusion <span class="math-container">$(A \to B) \vee (A \to C)$</span>. Here's what I have so far.</p>
<p><a href="https://i.stack.imgur.com/1AgTZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1AgTZ.png" alt="enter image description here" /></a></p>
<p>From here I'm stuck (and I'm not even sure if this is correct). My idea is to use negation intro by assuming the opposite and coming up with a contradiction. I assumed <span class="math-container">$A$</span> which led to <span class="math-container">$B \vee C$</span> and, as you can see, I'm trying or elim but the only way I can think of doing this is to use conditional intro and then or intro but that seems to only work for a single subproof. In other words, I can't use the assumption of <span class="math-container">$B$</span> to say <span class="math-container">$A \to B$</span>. This is called an indirect proof.</p>
|
Bram28
| 256,001 |
<p>It is always a <em>very bad</em> sign when someone has started a bunch of subproofs without indicating what happens at the end of the subproof.</p>
<p>A proof should always have a <em>plan</em> or <em>outline</em>, and subproofs provide the skeleton to do so. But again, you need to indicate what you want to do with the subproof, and that involves indicating what you want as the last line of your subproof. You haven't done that for any of the three subproofs you started, which is exactly why you get in trouble and can't see the forest for the trees.</p>
<p>Now, it is clear that with your first subproof you are hoping to do a proof by contradiction. So, start by creating the proper setup for that:</p>
<p><span class="math-container">$A \rightarrow (B \lor C)$</span></p>
<p><span class="math-container">$\quad \neg ((A \rightarrow B) \lor (A \rightarrow C))$</span></p>
<p><span class="math-container">$\quad \text{... skip a bunch of lines ...}$</span></p>
<p><span class="math-container">$\quad \bot$</span></p>
<p><span class="math-container">$\neg \neg ((A \rightarrow B) \lor (A \rightarrow C))$</span></p>
<p><span class="math-container">$(A \rightarrow B) \lor (A \rightarrow C)$</span></p>
<p>Ok, now that we have set that up properly, let's go back inside the subproof, and see how we can derive the contradiction from the premise and the assumption.</p>
<p>Now, it is at this point that you assume <span class="math-container">$A$</span>. Why?</p>
<p>Actually, I think I know why, because I have seen it all too often: you are probably thinking "ooh, it would be nice to have <span class="math-container">$A$</span>, because then I can combine that with the premise! OK, so let's just assume <span class="math-container">$A$</span>"</p>
<p>OK, the problem with this kind of thinking is that you end up <em>assuming</em> something that you <em>want</em> ... which is always a bad idea, as that will often lead to a circular proof. Indeed, suppose you were to combine <span class="math-container">$A$</span> with the premise, and get <span class="math-container">$B \lor C$</span> ... OK ... now what?! Well, one thing you can do is to then close the subproof and conclude <span class="math-container">$A \rightarrow (B \lor C)$</span> ... but note that now you just get the very premise, i.e. you are getting nowhere.</p>
<p>Here is some general advice on subproofs, that goes back to my initial point about having a plan: before starting any subproof, you should already know how you are going to use that subproof, and in particular, what the last line of your subproof should be, and what rule you will apply after the subproof is done.</p>
<p>Ok, let's regroup. There is really no good reason to assume <span class="math-container">$A$</span>. Ok, but what should you do on line 3? Well, again, if you <em>had</em> <span class="math-container">$A$</span>, you could combine that with the premise, but rather than <em>assuming</em> <span class="math-container">$A$</span>, you could try and make <span class="math-container">$A$</span> your new goal. And, to prove <span class="math-container">$A$</span>, one thing you could do is to assume <span class="math-container">$\neg A$</span>, and show that that leads to a contradiction.</p>
<p>However, there is a much more straightforward thing to do. The assumption <span class="math-container">$\neg ((A \rightarrow B) \lor (A \rightarrow C))$</span> is a negation of a disjunction, and you are probably aware that by DeMorgan that is equivalent to the conjunction of the negated disjuncts, i.e. to <span class="math-container">$\neg (A \rightarrow B) \land \neg (A \rightarrow C)$</span>. Now, I suspect you don't have DeMorgan as an inference rule in your specific system, but think about it this way: apparently you should be able to derive both <span class="math-container">$\neg (A \rightarrow B)$</span> as well as <span class="math-container">$\neg (A \rightarrow C)$</span> from the assumption. Now, both of those statements are negations, and you probably know the best strategy to prove negations: Proof by Contradiction!</p>
<p>OK, so we have another piece of our plan:</p>
<p><span class="math-container">$A \rightarrow (B \lor C)$</span></p>
<p><span class="math-container">$\quad \neg ((A \rightarrow B ) \lor (A \rightarrow C))$</span></p>
<p><span class="math-container">$\quad \quad A \rightarrow B$</span></p>
<p><span class="math-container">$\quad \quad \text{skip a few lines...}$</span></p>
<p><span class="math-container">$\quad \quad \bot$</span></p>
<p><span class="math-container">$\quad \neg (A \rightarrow B)$</span></p>
<p><span class="math-container">$\quad \quad (A \rightarrow C)$</span></p>
<p><span class="math-container">$\quad \quad \text{skip a few lines ...}$</span></p>
<p><span class="math-container">$\quad \quad \bot$</span></p>
<p><span class="math-container">$\quad \neg (A\rightarrow C)$</span></p>
<p><span class="math-container">$\quad \text{few lines ...}$</span></p>
<p><span class="math-container">$\quad \bot$</span></p>
<p><span class="math-container">$\neg \neg ((A \rightarrow B) \lor (A \rightarrow C))$</span></p>
<p><span class="math-container">$(A \rightarrow B) \lor (A \rightarrow C)$</span></p>
<p>OK, see how this is all nicely organized? How you now have an outline, to which you can add details and provide the missing steps at some later point? <em>That</em> is what you are supposed to do! That is how you keep your proof, and your very thinking organized. Indeed, the very point of doing formal logic proofs is to teach you that very skill of careful organization!</p>
<p>Now, I am going to leave those details to you, but leave you with one more hint: what is <span class="math-container">$\neg (A \rightarrow B)$</span> equivalent to? .... try and derive that, do the same for <span class="math-container">$\neg (A \rightarrow C)$</span>, and you're pretty much done! Good luck!</p>
|
3,325,340 |
<p>Show that <span class="math-container">$$ \lim\limits_{(x,y)\to(0,0)}\dfrac{x^2y^2}{x^2+y^2}=0$$</span>
My try:
We know that, <span class="math-container">$$ x^2\leq x^2+y^2 \implies x^2y^2\leq (x^2+y^2)y^2 \implies x^2y^2\leq (x^2+y^2)^2$$</span>
Then, <span class="math-container">$$\dfrac{x^2y^2}{x^2+y^2}\leq x^2+y^2 $$</span>
So we chose <span class="math-container">$\delta=\sqrt{\epsilon}$</span></p>
|
Dr. Sonnhard Graubner
| 175,066 |
<p>Or alternatively, by AM-GM we get
<span class="math-container">$${x^2+y^2}\geq 2|xy|$$</span> so
<span class="math-container">$$\frac{x^2y^2}{x^2+y^2}\le \frac{x^2y^2}{2|xy|}=\frac{1}{2}|xy|$$</span> and this tends to zero if <span class="math-container">$x,y$</span> tend to zero.</p>
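<p>If an explicit <span class="math-container">$\delta$</span> is wanted for this route as well: since <span class="math-container">$\frac{1}{2}|xy|\le\frac{1}{4}(x^2+y^2)$</span>, taking <span class="math-container">$\delta=2\sqrt{\epsilon}$</span> gives
<span class="math-container">$$\frac{x^2y^2}{x^2+y^2}\le \frac{1}{2}|xy|\le\frac{1}{4}(x^2+y^2)<\frac{\delta^2}{4}=\epsilon$$</span>
whenever <span class="math-container">$0<\sqrt{x^2+y^2}<\delta$</span>.</p>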
|
200,931 |
<p>I want to generate a layered drawing of the <a href="https://en.wikipedia.org/wiki/Hoffman%E2%80%93Singleton_graph" rel="noreferrer">Hoffman–Singlelton graph</a>. As an example of what I want, here is a layered drawing of the Petersen graph:</p>
<p><a href="https://i.stack.imgur.com/doEt5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/doEt5.png" alt="enter image description here"></a></p>
<p>Now if I right click on the output of <code>PetersenGraph[]</code> and do Graph Layout -> Layered drawing, I get this:</p>
<p><a href="https://i.stack.imgur.com/B4xUM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B4xUM.png" alt="enter image description here"></a></p>
<p>Clearly a lot of the important visual information at the end layer is lost because the edges all overlap. Is there a way to recreate something similar to the top image, where the edges at the last layer are visible?</p>
<p>My actual goal is not to do this with the Petersen, but with the Hoffman–Singleton (in Mathematica, <code>FromEntity[Entity["Graph", "HoffmanSingletonGraph"]]</code>). Needless to say, I got a similar output for this graph:</p>
<p><a href="https://i.stack.imgur.com/imLVp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/imLVp.png" alt="enter image description here"></a></p>
<p>I appreciate any assistance with this.</p>
|
kglr
| 125 |
<ol>
<li>Rescale the vertex coordinates given by <code>"LayeredEmbedding"</code> to
run form 0 to 1 in each dimension,</li>
<li><code>Pick</code> the edges with vertices in the right-most layer by checking if
both vertices have first coordinate equal to <code>1.</code>,</li>
<li>Use a slightly modified version of the built-in edge shape function
<code>"CurvedArc"</code> as the <code>EdgeShapeFunction</code> for the right-most edges.</li>
</ol>
<h3> </h3>
<pre><code>ClearAll[eSF, toLayered, layeredG]
eSF[curv_: 1] := GraphElementData[{"CurvedArc", "Curvature" -> curv}][
SortBy[-Last @ # &] @ #[[{1, -1}]], ##2] &;
toLayered = Module[{el = EdgeList[#], g, re,
vcoords = Round[#, .001] & @ Transpose[Rescale /@
Transpose[GraphEmbedding[#, {"LayeredEmbedding", "Orientation" -> Left}]]]},
g = SetProperty[#, VertexCoordinates -> vcoords];
re = Pick[el, First[PropertyValue[{g, #}, VertexCoordinates]]& /@ Apply[List][#] ==
{1., 1.}& /@ el];
SetProperty[g, { EdgeShapeFunction -> {Alternatives @@ re -> eSF[]}}]] &;
</code></pre>
<p>We compose <code>toLayered</code> with <code>Graph</code> to get a function that takes the same arguments and options as <code>Graph</code>:</p>
<pre><code>layeredG = toLayered @* Graph;
</code></pre>
<p><strong>Examples:</strong></p>
<pre><code>layeredG[PetersenGraph[], EdgeStyle -> Black,
VertexStyle -> Directive[White, EdgeForm[Black]], VertexSize -> .3]
</code></pre>
<p><a href="https://i.stack.imgur.com/U4Dcw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U4Dcw.png" alt="enter image description here"></a></p>
<pre><code>hsg = FromEntity[Entity["Graph", "HoffmanSingletonGraph"]];
layeredG[hsg, ImageSize -> 600, VertexSize -> .8,
VertexStyle -> White, EdgeStyle -> Gray]
</code></pre>
<p><a href="https://i.stack.imgur.com/NPyRc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NPyRc.png" alt="enter image description here"></a></p>
|
351,642 |
<p>So I'm proving that a group $G$ with order $112=2^4 \cdot 7$ is not simple. And I'm trying to do this in extreme detail :) </p>
<p>So, assume simple and reach contradiction. I've reached the point where I can conclude that $n_7=8$ and $n_2=7$. </p>
<p>I let $P, Q\in \mathrm{Syl}_2(G)$ and am now dealing with the cases $|P\cap Q|=1, 2, 2^2, 2^3$ or $2^4$. </p>
<p>I easily find contradiction when $|P\cap Q|=2^4$ and $2$. </p>
<p>Um, got stuck REAL bad on the case $|P\cap Q|=2^3$ and $2^2$. </p>
<p>If $|P \cap Q |=2^3= 8$ and $|P|=|Q|=16$, is there any relationship between $P,Q$ and their intersection that can help me? </p>
|
Dalimil Mazáč
| 59,757 |
<p>Sylow's theorems require that $n_2=1$ or 7, $n_7=1$ or 8, so $G$ has a chance to be simple only if $n_2=7$ and $n_7=8$. Note that the Sylow 7-subgroups can only intersect at the identity, any two Sylow 2-subgroups can share a subgroup of order at most 8, and a Sylow 7-subgroup and a Sylow 2-subgroup can share only the identity.</p>
<p>Hence the union of Sylow 7-subgroups has $1+8\cdot(7-1)=49$ elements, and the union of the Sylow 2-subgroups has at least $8+7\cdot(16-8)=64$ elements, which happens precisely when they all share a subgroup $H$ of order 8. In this case, the union of all Sylow 7-subgroups and 2-subgroups has $64+49-1=112$ elements, so we learn that no other scenario (one with a greater union of the Sylow 2-subgroups) is allowed.</p>
<p>Now notice that $H$ is a normal subgroup of $G$ since conjugation by $g\in G$ permutes the Sylow 2-subgroups and so preserves their intersection. Hence $G$ is not simple.</p>
<p><strong>Edit:</strong> This answer is wrong because the union of the Sylow 2-subgroups can be smaller than 112; see the comments below.</p>
|
625,821 |
<p>$$\int^\infty_0\frac{1}{x^3+1}\,dx$$</p>
<p>The answer is $\frac{2\pi}{3\sqrt{3}}$.</p>
<p>How can I evaluate this integral?</p>
|
lsp
| 64,509 |
<p>$$x^3+1 = (x+1)(x^2-x+1)$$
<strong>Logic:</strong> Do a partial fraction decomposition and find $A,B,C$.</p>
<p>$$\frac{1}{x^3+1} = \frac{A}{x+1}+\frac{Bx+C}{x^2-x+1}$$
By comparing corresponding coefficients of different powers of $x$, you will end up with equations in $A,B,C$. After solving, you get:
$$A=\frac{1}{3},B=\frac{-1}{3},C=\frac{2}{3}$$
Then use this:
$$\int\frac{1}{x}\,dx=\log x+c$$</p>
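<p>If it helps, here is one way the pieces fit together after completing the square in the quadratic factor (the arctangent term comes from $\int\frac{du}{u^2+a^2}=\frac{1}{a}\arctan\frac{u}{a}+c$):
$$\int_0^\infty\frac{dx}{x^3+1}=\left[\frac{1}{3}\log(x+1)-\frac{1}{6}\log(x^2-x+1)+\frac{1}{\sqrt{3}}\arctan\frac{2x-1}{\sqrt{3}}\right]_0^\infty=\frac{1}{\sqrt{3}}\left(\frac{\pi}{2}+\frac{\pi}{6}\right)=\frac{2\pi}{3\sqrt{3}},$$
since the logarithmic terms cancel as $x\to\infty$ and vanish at $x=0$.</p>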
|
137,755 |
<p>Suppose that $X$ is a scheme and $x\in X$ is a point. The stalk of $X$ at $x$ is a (local) ring and we can form its spectrum $Y_x=\rm{Spec}(\mathcal{O}_{X,x})$.</p>
<p>There is a canonical map $Y_x\to X$. We can define it by fixing an affine neighborhood $x\in U\cong \rm{Spec}(R)$, viewing $x$ as a prime ideal in $R$, so that $\mathcal{O}_{X,x}\cong R_x$ is a localization. The localization map $R\to R_x$ then induces the map of schemes $Y_x\to U \subseteq X$.</p>
<p>My question is this: is there a name for this construction? Are there familiar methods or theorems where it arises?</p>
|
Matthieu Romagny
| 17,988 |
<p>In EGA1, 2.4 this is called the <em>local scheme of $X$ at $x$</em>.</p>
|
219,014 |
<p>I have the list</p>
<pre><code>t1 = {{-1, 0}, {-2, 0}, {-3, 0}, {0, 0}, {-2, 0}, {1, 1}}
</code></pre>
<p>How do I find the position where an element repeats? In this case it would be element <code>{-2, 0}</code> at position 5, because <code>{-2, 0}</code> first came up at position 2. So the answer would be 5.</p>
<p>I made it as far as comparisons with the first element, but I don't know how to proceed.</p>
|
MikeY
| 47,314 |
<p>This sort of question comes up, where the range of <code>K</code> is open-ended. If you are dealing with a generalized <code>g</code> function where you don't already know the answer, Mathematica won't find it for you straightaway (hopefully someone will correct me on this). A method I use is to generate answers with increasing <code>K</code> and look for patterns using the old Mark I eyeball induction process.</p>
<pre><code>gg := Log[1 + Sum[Log[1 + x[k]], {k, 1, nn}]];
Table[
tab = Table[x[i] \[Distributed] BernoulliDistribution[p[i]], {i, nn}];
Expectation[gg, tab],
{nn, 1, 2}] // TableForm
</code></pre>
<p><span class="math-container">$$
\left(
\begin{array}{l}
p(1) \log (1+\log (2)) \\
p(2) \log (1+\log (2))+p(1) (p(2) (\log (1+\log (4))-2 \log (1+\log (2)))+\log (1+\log (2))) \\
\end{array}
\right)$$</span></p>
|
1,238,292 |
<p>This is a homework problem, so please do not give more than hints. I must convert
\begin{align}
\int_0^\sqrt{2}\int_x^\sqrt{4-x^2}\sin\left(x^2+y^2\right)\:dy\:dx\tag{1}
\end{align}
to polar coordinates. This is my attempt:
\begin{align}
\int_{\pi/4}^{\pi/2}\int_{\color{red}{2\cos\left(\theta\right)}}^{\color{red}{2\sin\left(\theta\right)}}\sin\left(r^2\right)r\:dr\:d\theta,\tag{2}
\end{align}
but I am unsure about the $\color{red}{\text{red}}$ limits, because while I am solving I end up at
\begin{align}
\int_{\pi/4}^{\pi/2}\frac{\cos\left(4\cos^2\left(\theta\right)\right)}{2}-\frac{\cos\left(4\sin^2\left(\theta\right)\right)}{2}\:d\theta\tag{3}
\end{align}
after a single round of $u$-substitution. There's no way it should end up here, unless it's really easy and I'm just not thinking...</p>
<p>I think the upper limit is $\color{red}{2\sin\left(\theta\right)}$ because a substitution of $2\cos\left(\theta\right)$ into $\sqrt{4-x^2}$ results in
\begin{align}
\sqrt{4-x^2}&=\sqrt{4-4\cos^2\left(\theta\right)}\\
&=2\sin\left(\theta\right),
\end{align}
and the lower limit is $\color{red}{2\cos\left(\theta\right)}$ by direct substitution as before.</p>
<p>Here is my $u$-substitution:</p>
<p>Let $\xi=r^2$, then $d\xi/2r=dr$, resulting in
\begin{align}
\int r\sin\left(r^2\right)\:dr&=\frac{1}{2}\int \sin\left(\xi\right)\:d\xi\\
&=\frac{-\cos\left(\xi\right)}{2}=\frac{-\cos\left(r^2\right)}{2}
\end{align}
Thus,
\begin{align}
\int_{\pi/4}^{\pi/2}\int_{2\cos\left(\theta\right)}^{2\sin\left(\theta\right)}r\sin\left(r^2\right)\:dr\:d\theta&=\int_{\pi/4}^{\pi/2}\left[\frac{-\cos\left(r^2\right)}{2}\right]_{2\cos\left(\theta\right)}^{2\sin\left(\theta\right)}\;d\theta.
\end{align}
Where have I gone wrong?</p>
|
E.H.E
| 187,799 |
<p>Firstly, you must sketch the region:
<img src="https://i.stack.imgur.com/rmYuQ.png" alt="enter image description here">
$$\int_{\pi /4}^{\pi /2}\int_{0}^{2}\sin(r^2)rdrd\theta $$</p>
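<p>From here the evaluation is routine: the inner integral is $\left[-\frac{1}{2}\cos(r^2)\right]_0^2=\frac{1-\cos 4}{2}$, so
$$\int_{\pi /4}^{\pi /2}\int_{0}^{2}\sin(r^2)\,r\,dr\,d\theta=\frac{\pi}{4}\cdot\frac{1-\cos 4}{2}=\frac{\pi(1-\cos 4)}{8}.$$</p>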
|
2,312,968 |
<p>If $t=\ln(x)$, $y$ some function of $x$, and $\dfrac{dy}{dx}=e^{-t}\dfrac{dy}{dt}$, why would the second derivative of $y$ with respect to $x$ be:
$$-e^{-t}\frac{dt}{dx}\frac {dy}{dt} + e^{-t}\frac{d^2y}{dt^2}\frac{dt}{dx}?$$</p>
<p>I know this links into the chain rule. I don't have a good intuition for why the first term has $\dfrac{dt}{dx}\dfrac{dy}{dt}$ (although I strongly feel it's there so that we can change the variable, since this question arose in the context of a second-order differential equation where $y$ was differentiated in terms of $x$, but the equation was non-linear, so we had to make it linear by substitution). Moreover, the main problem that I would plead to be addressed is why the second term is differentiated in the way that it is. Basically, my question is: why is the derivative of $\dfrac{dt}{dx}\dfrac{dy}{dt}$ with respect to $x$ given as $\dfrac{d^2y}{dt^2}\dfrac{dt}{dx}$? </p>
<p>It would be preferable to explain any mathematical derivations in English, but any of your personal time to help out is always much appreciated.</p>
|
hamam_Abdallah
| 369,188 |
<p><strong>hint</strong></p>
<p>from the graph, we derive that</p>
<p>$$f(0)=f\left(\tfrac{4\pi}{3}\right)=0$$</p>
<p>thus</p>
<p>$$\sin(-k)+c=\sin\left(\tfrac{4\pi}{3}-k\right)+c=0$$</p>
<p>from here,
$$-k=\pi-\left(\tfrac{4\pi}{3}-k\right) $$</p>
<p>You can finish.</p>
|