http://en.wikibooks.org/wiki/Digital_Signal_Processing/Discrete_Data
# Digital Signal Processing/Discrete Data
Continuous data is something that most people are familiar with. A quick analogy: when an analog data sensor (e.g., your eye) becomes active (you blink it open), it starts receiving input immediately (you see sunshine), it converts the input (optical rays) to a desired output (optic nerve signals), and sends the data off to its destination (your brain). It does this without hesitation, and continues doing so until the sensor turns off (you blink your eyes closed). The output is often called a data "stream"; once started, it might run forever unless something tells it to stop. Now, instead of a physical sensor, if we can define our data mathematically in terms of a continuous function, we can calculate our data value at any point along the data stream. It's important to realize that this provides the possibility of an infinite (∞) number of data points, no matter how small the interval might be between the start and stop limits of the data stream.
This brings us to the related concept of Discrete data. Discrete data is non-continuous, only existing at certain points along an input interval, and thereby giving us a finite number of data points to deal with. This data can also be defined by a mathematical function, but one that is limited and can only be evaluated at the discrete points of input. These are called "discrete functions" to distinguish them from the continuous variety.
Discrete functions and data give us the advantage of being able to deal with a finite number of data points.
## Sets and Series
Discrete data is displayed in sets as such:
```X[n] = [1 2 3& 4 5 6]
```
We will be using the "&" symbol to denote the data item that occurs at point zero. Now, by filling in values for n, we can select different values from our series:
```X[0] = 3
X[-2] = 1
X[3] = 6
```
We can move the zero point anywhere in the set that we want. It is also important to note that we can pad a series with zeros on either side, so long as we keep track of the zero-point:
```X[n] = [0 0 0 0 1 2 3& 4 5 6] = [1 2 3& 4 5 6]
```
In fact, we assume that any point in our series without an explicit value is equal to zero. So if we have the same set:
```X[n] = [1 2 3& 4 5 6]
```
We know that every value outside of our given range is zero:
```X[100] = 0
X[-100] = 0
```
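To make the zero-point bookkeeping concrete, here is a small sketch (in Python; the class and names are purely illustrative and not part of the original text) of one way to model a series with a marked zero point:
```
# A hypothetical helper modelling the X[n] = [1 2 3& 4 5 6] notation:
# store the samples plus the position of the "&" (zero-point) element.
class DiscreteSeries:
    def __init__(self, samples, zero_index):
        self.samples = list(samples)
        self.zero_index = zero_index      # position of the n = 0 sample

    def __getitem__(self, n):
        k = n + self.zero_index           # shift n into list coordinates
        if 0 <= k < len(self.samples):
            return self.samples[k]
        return 0                          # everything outside the range is zero

X = DiscreteSeries([1, 2, 3, 4, 5, 6], zero_index=2)   # the "&" sits on the 3
print(X[0], X[-2], X[3], X[100])                       # 3 1 6 0
```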
## Two Types of Discrete Data Sets
Data can be discrete in magnitude, discrete in time, or both. Here are some examples:
### Discrete in Time
Discrete-in-time values only exist at specific points in time. If we try to take the value of a discrete-in-time data set at a time point where there is no data, our result will be zero. The image below shows an example waveform that is discrete in time, having gain values defined only at certain points along the time line. Notice that while the signal is discrete in time, the gain value of each sample is not limited or quantized, and it may take any magnitude value.
### Discrete in Value
Discrete-in-value series can only exist at certain values of magnitude, no matter when we look at them. For instance, we might say that a certain computer device can only handle integers, and no decimals. The image below shows a signal that is discrete in magnitude, but is not discrete in time. While the waveform indeed has a gain value at every point in time, the values are quantized to specific magnitudes, such as only integers. The steps between the allowed values produce the "staircase" effect in the image. If we try to take the value at a time point exactly when the magnitude transition occurs, a quantization rule must be in place to deal with it, setting the value to the appropriate previous or subsequent magnitude.
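As a small illustration (a Python sketch, not from the original page), one possible quantization rule simply rounds each value to the nearest allowed level:
```
# A minimal quantization sketch: snapping a continuous value onto a grid of
# allowed magnitudes. Rounding is one possible rule; floor or truncation are others.
def quantize(value, step=1.0):
    """Round a continuous magnitude to the nearest allowed level."""
    return step * round(value / step)

print(quantize(2.7))        # 3.0 -> discrete in value (integer levels)
print(quantize(2.7, 0.5))   # 2.5 -> a coarser or finer grid just changes `step`
```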
## Stem Plots
Discrete data is frequently represented with a stem plot. Stem plots mark data points with dots, and draw a vertical line between the t-axis (the horizontal time axis) and the dot:
```F[n] = [5& 4 3 2 1]
```
## About the Notation
The notation we use to denote the zero point of a discrete set was chosen arbitrarily. Textbooks on the subject will frequently use arrows, periods, or underscores to denote the zero position of a set. Here are some examples:
```
        |
        v
Y[n] = [1 2 3 4 5]
```
```
        .
Y[n] = [1 2 3 4 5]
```
```
        _
Y[n] = [1 2 3 4 5]
```
All of these notations are awkward to typeset in Wikibooks, so we have decided to use an ampersand (&) to denote the zero point. The ampersand is not used for any other purpose in this book, so hopefully we can avoid some confusion.
## Sampling
Sampling is the process of converting continuous data into discrete data. The sampling process takes a snapshot of the value of a given input signal, rounds if necessary (for discrete-in-value systems), and outputs the discrete data. A common example of a sampler is an Analog to Digital Converter (ADC).
Let's say we have a function based on time (t). We will call this continuous-time function f(t):
$f(t) = 2tu(t)$
Where u(t) is the unit step function. Now, if we want to sample this function, mathematically, we can plug in discrete values for t, and read the output. We will denote our sampled output as F[n]:
```F[n] = 0 : n < 0
F[1] = f(1) = 2
F[2] = f(2) = 4
F[100] = f(100) = 200
```
This means that our output series for F[n] is the following:
```F[n] = [0& 2 4 6 8 10 12 ...]
```
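As a quick sketch (Python, added here for illustration), sampling this function at integer times reproduces exactly that series:
```
# Sampling f(t) = 2 t u(t) at integer times, mirroring the F[n] series above.
def u(t):
    return 1.0 if t >= 0 else 0.0     # unit step function

def f(t):
    return 2.0 * t * u(t)

F = [f(n) for n in range(0, 7)]       # samples for n < 0 are zero because of u(t)
print(F)                              # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
```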
## Reconstruction
We digitize (sample) our signal, we do some magical digital signal processing on that signal, and then what do we do with the results? Frequently, we want to convert that digital result back into an analog signal. The problem is that the sampling process loses a lot of data. Specifically, all the data between the discrete data points is lost, and if the sampler rounds the value, then a small amount of error is built into the system that can never be recovered. A common example of a reconstructor is a Digital to Analog Converter (DAC).
### Interpolation
When converting a digital signal into an analog signal, frequently a process called Interpolation is used to make the analog version a more likely representation of the signal. Interpolation essentially "connects the dots" between discrete data points, and the reconstructor then outputs an analog waveform with the dots connected. We will show the process below:
We start out with an uninterpolated stem plot:
We connect the points in our stem plot with straight lines (dotted lines):
We draw new points on that line, midway between our existing points (dashed lines):
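In code, a minimal linear-interpolation sketch (Python, added for illustration) that inserts one midpoint between each pair of samples looks like this:
```
# "Connect the dots": linear interpolation that places one new sample midway
# between each pair of existing samples, doubling the effective sample rate.
def upsample_linear(samples):
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2.0)     # the new midpoint sample
    out.append(samples[-1])
    return out

F = [5, 4, 3, 2, 1]                   # the stem-plot example from above
print(upsample_linear(F))             # [5, 4.5, 4, 3.5, 3, 2.5, 2, 1.5, 1]
```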
http://physics.stackexchange.com/questions/tagged/atoms?sort=unanswered&pagesize=15
# Tagged Questions
A nucleus made of protons and neutrons surrounded by a cloud of electrons equal in number to the protons.
1 answer · 280 views
### Why can free lithium atoms not take part in an Auger process?
Shouldn't it be possible for an incoming photon to excite one of the 1s electrons to a 2p state (or one of even higher energy) and then for the excited electron to drop back to 1s and kick out the 2s ...
1 answer · 30 views
### Minimum atomic clearance permitting motion
Suppose you were to build the piston and cylinder in a car engine atom-by-atom. Let's just say carbon, since you can make a lot of different shapes due to its high valence. So assuming you make the ...
0 answers · 100 views
### Is the translational information all that matters, or do we need to take into account internal states?
For anyone in this community that's familiar with quantum teleportation, I need desperate help. I am currently working on my senior thesis and my goal is to teleport a molecule. Background: So in ...
0 answers · 97 views
### Does this photon emission problem even make sense?
I came across this question in an introductory physics course awhile back and I never got over it: "A hydrogen atom has an electron in the n=5 orbit, what is the maximum number of photons that might ...
0 answers · 78 views
### why is the transition $3p^53d^2 \to 3p^63d^1$ (hydrogen atom) forbidden?
What I was thinking is that in 3d subshell (l=2) we have two electrons with $$m_l=-2$$ (spin up and down) and if we move to 3p we will fill the last vacant position - that is $$m_l=1$$ with spin down ...
0 answers · 28 views
### What methods exist for us to measure the position and momentum of atoms that make up molecules?
In reference to this paper, http://iopscience.iop.org/1355-5111/8/1/014, we are able to localize atoms using homodyne measurement. Would it be too naive to consider we can measure the position of ...
0 answers · 24 views
### How the radio-frequency magnetic field helps to record fluorescence signal
Suppose a sample of atoms (say rubidium), which is exposed to constant magnetic field, is irradiated with circularly polarized light so that the electrons are excited from lower S level (F=3) to P ...
0 answers · 18 views
### Degeneracy of orbitals in magnetic field
Why is it that in an external (uniform) magnetic field the degeneracy of the d and f orbitals is lost but the degeneracy of the p orbitals remains intact, assuming the main cause of losing degeneracy is the difference ...
0 answers · 44 views
### How large must the Quantum teleportation fidelity have to be in order for it to be useful?
This question relates and stems from my original question. Please read this one and the comments before answering this question. Quantum Teleportation Fidelity I know that for discrete variables ...
0 answers · 49 views
### Where to find probability density plots for all elements?
Does anyone know where I can find something similar to this, but for all elements? I would love to find something with the same image quality. Also, is there any software that can produce images ...
0 answers · 307 views
### Evaluating Transition probability between different states of Hydrogen atom
I am trying to evaluate the inner product $\langle 2S_{\frac{1}{2}},F',F'_{3}|\delta^{3}(x)\sigma_{i}P_{i}|2P_{\frac{1}{2}},F,F_{3}\rangle$. It's written in the form $\langle nl_{j},F,F_{3}|$, where ...
http://physics.stackexchange.com/questions/35181/ward-takahashi-identities-from-z-invariance
# Ward Takahashi identities from Z invariance
I'm trying to get the Ward-Takahashi identities using the approach in Ryder's book (pages 263-266). I like that he starts by demanding gauge invariance of Z in an explicit way and then explores the consequences of that for the functional generators of vertex functions. But the actual calculation is bugging me.
The author seems oblivious to the fact that the fermionic fields and sources ($\psi$ and $\eta$) are Grassmann variables and keeps commuting them out with no regard for my sanity. For instance, equation (7.102) has a term:
$$i e (\bar{\eta}\psi-\bar{\psi}\eta)$$
that promptly becomes (exchanging the fields by ${1\over i}$ times the derivatives on the sources, acting on Z to the right):
$$e (\bar{\eta}{\delta \over \delta \bar{\eta} }-\eta{\delta \over \delta \eta })$$
In my opinion that should be:
$$e (\bar{\eta}{\delta \over \delta \bar{\eta} }+\eta{\delta \over \delta \eta })$$
This one has no consequences because he commutes them again right after. But when I try to do the calculation while being careful with the Grassmann variables, I can never get the right signs in (7.111). I'm especially troubled by the derivatives below (this is what I'm getting, but one of them should have a different sign in order to get the right WT identities):
$${\delta \over \delta \bar{\psi}(x_1) } {\delta \over \delta \psi(y_1) } \left[{\delta \Gamma \over \delta \psi(x) }\psi(x)\right]_{\psi=\bar{\psi}=0}=-\delta^4(x-y_1){\delta^2 \Gamma \over \delta \bar{\psi}(x_1) \delta\psi(x) }$$
$${\delta \over \delta \bar{\psi}(x_1) } {\delta \over \delta \psi(y_1) } \left[\bar{\psi}(x) {\delta \Gamma \over \delta \bar{\psi}(x) } \right]_{\psi=\bar{\psi}=0} =\\=-\delta^4(x-x_1){\delta^2 \Gamma \over \delta \psi(y_1) \delta\bar{\psi}(x) } = \delta^4(x-x_1){\delta^2 \Gamma \over \delta \bar{\psi}(x) \delta\psi(y_1) }$$
Has anybody ever done this calculation in detail and has some pointers? Are there any other references that follow this same approach?
EDIT: Just a shameless bump: I'm still looking for some light on this. Any reference where this is done in detail would help.
-
Ryder knows the answer and is making mistakes in intermediate steps. It is hard to answer without looking at the book. – Ron Maimon Sep 2 '12 at 22:02
Just to be sure: am I right that this commutation business is all wrong? Also: is the link I posted to the book itself (on Google books) working? – Forever_a_Newcomer Sep 3 '12 at 0:40
Yes, the commutations are wrong. The link works. I just didn't rederive the WT identity. – Ron Maimon Sep 3 '12 at 2:24
## 2 Answers
Disclaimer: I don't have a copy of Ryder's book and don't know what conventions it uses.
But: it's not true that products of fermions always anticommute. For instance, suppose we are studying two-component spinors, $\psi_\alpha$, where $\alpha$ takes values 1 or 2. Now, it's true that $\psi_1$ is a Grassmann variable. But when taking the product of spinors, we usually mean something like $\psi \chi = \psi^\alpha \chi_\alpha$, where raising an index is defined as $\psi^\alpha = \epsilon^{\alpha \beta} \psi_\beta$. Here $\epsilon$ is the antisymmetric symbol, $\epsilon^{12} = -\epsilon^{21} = 1$. In other words:
$$\psi \chi = \psi_1 \chi_2 - \psi_2 \chi_1 = -\chi_2 \psi_1 + \chi_1 \psi_2 = \chi \psi$$
In the middle step, we used the Grassmann nature of the variables, and the other steps are just the definition of the spinor product. So, two-component spinors commute (when multiplication is the product defined using the $\epsilon$ symbol) even though their components are anticommuting variables.
-
– Forever_a_Newcomer Aug 30 '12 at 13:26
I remember going through this same section and having concerns about the signs. I think Ryder is probably wrong here.
-
http://physics.stackexchange.com/questions/45150/an-issue-about-the-compactness-and-the-existence-of-ctcs
# An issue about the compactness and the existence of CTCs
There is a well known fact that a compact spacetime necessarily contains a closed timelike curve (CTC). Proof can be found in several books on GR (e.g. Hawking, Ellis, Proposition 6.4.2), and in essence goes like this:
The spacetime $M$ can be covered by open sets of the form $I^+(p)$, chronological future of the point $p \in M$ (note that a priori $p$ is not an element of the set $I^+(p)$, but such situation can happen in a presence of CTCs). Now, suppose that $M$ is compact. Then there is a finite subcover, say
$$\{ I^+(p_1), \dots, I^+(p_n) \}$$
The point $p_1$ is contained in $I^+(p_{k_1})$ for some $1 \le k_1 \le n$, the point $p_{k_1}$ is contained in $I^+(p_{k_2})$, and so on. Since this subcover is finite, eventually some point $p_{k_r}$ must belong to $I^+(p_{k_s})$, with $s \le r$. Then there is a future directed timelike curve going from $p_{k_r}$ to $p_{k_s}$ (since $s \le r$) and then from $p_{k_s}$ back to $p_{k_r}$ (since $p_{k_r} \in I^+(p_{k_s})$), which gives a closed timelike curve through $p_{k_r}$ (and $p_{k_s}$) in $M$. Q.E.D.
The question that bothers me is: what is implicitly assumed about the spacetime $M$, by saying that the family of sets of the form $I^+(p)$ is indeed a covering of $M$?
Take for example a flat spacetime with compact space part and finite time direction, e.g. $M = [0,1] \times T^3$, where $T^3$ is a (spacelike) 3-torus. This is a compact manifold (since it is a product of two compact manifolds) and there are no CTCs. The loophole in the argument above seems to be in the fact that the "initial points", $\{0\} \times T^3$, are not covered by any set of the form $I^+(p)$.
One can easily modify this example by contracting the initial and final spacelike slices to points ("Big Bang" and "Big Crunch"); the resulting spacetime is still compact and contains no CTCs.
Do these manifolds fail to be "regular spacetimes" for some reason?
-
$M=[0,1]\times T^3$ is a manifold-with-boundary. Maybe this is the problem? – twistor59 Nov 26 '12 at 15:46
– Luboš Motl Nov 26 '12 at 16:02
## 1 Answer
What is said above in the comments is correct. Note that you can violate this theorem with essential curvature singularities, too. The "Minkowski Sphere" with line element $ds^{2} = -d\theta^{2} + \sin^{2}\theta \,d\phi^{2}$ contains no CTCs (but it does contain closed null curves), but all future-pointing timelike geodesics begin at the 'south pole' and end at the 'north pole'.
-
OK, so the original statement should be rephrased as "a geodesically complete compact spacetime necessarily contains a CTC". However, in classical GR we are typically dealing with singular (geodesically incomplete) spacetimes -- does this mean that this theorem is of very little use? – Ivica Smolić Nov 26 '12 at 16:36
@IvicaSmolić: I would probably say so--after all, CTC are unphysical. If they were generic features of spacetimes, we'd have some explaining to do to show why we don't observe such phenomena. Also, we typically deal with noncompact spacetimes (The closed Robertson-Walker cosmological model being the most notable exception), which would make the theorem already not apply. – Jerry Schirmer Nov 26 '12 at 16:41
@IvicaSmolić, I don't think compact spacetimes are considered very often, may be because of this theorem. – MBN Nov 28 '12 at 15:41
http://crypto.stackexchange.com/questions/2680/ecdsa-point-order-criterion?answertab=oldest
# ECDSA - point order criterion
I am creating a primitive demonstration of ECDSA over a small curve (p < 229), but my implementation has some weird issues: the verify process returns false even when the signature is correct. After testing it for some time and trying to track down the problem, I discovered that verification works only if the point order is prime.
So my question is: is there some condition on the point order? And what about points with non-prime order? Should they simply not be used? I have read some papers about ECDSA but didn't find anything about that.
Thanks
-
## 1 Answer
Without more details, I can't be certain what's going on with your implementation. However, here is one thing that can certainly cause it to fail sometimes if $n$ is composite: the ECDSA verifier needs to compute $s^{-1} \bmod n$ (where $s$ is effectively a random number between 1 and $n-1$). If $n$ is composite, then $s$ might not be relatively prime to $n$; if it is not, then $s^{-1}$ won't exist (because there's no number $t$ such that $s \cdot t = 1 \mod n$). If this happens during the verification, then either your modular inversion routine (which computes $s^{-1} \bmod n$) will fail, or it will claim to succeed, and return some incorrect value (which will cause later calculations to fail).
If the order of your generator is composite, you'll run into a second problem; it'll be easier to break. ECDSA can be broken if you can solve the 'discrete log' problem; that is, given the generator $G$ and a point $Q$, you can compute the value $d$ such that $Q = dG$. Now, if the order of the generator $G$ is composite (say, $pq$), then an attacker could compute $pG$, $qG$, $pQ$, $qQ$, and solve these two discrete log problems $pQ = d_p(pG)$ and $qQ = d_q(qG)$, and given the values $d_p, d_q$, reconstruct $d$. Solving these two discrete log problems would be considerably easier than solving the single discrete log problem (because the orders of the points involved are smaller); and hence we've just made things easier for the attacker.
Because generators with composite order can cause the verify to fail, and because they're cryptographically weaker, we always use generators with prime order.
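As a small illustration (a Python sketch, independent of any particular ECDSA library), the modular inversion step fails exactly when $s$ shares a factor with a composite $n$:
```
from math import gcd

def modinv(s, n):
    """Modular inverse of s mod n; raises ValueError if it does not exist."""
    if gcd(s, n) != 1:
        raise ValueError(f"{s} is not invertible mod {n}")
    return pow(s, -1, n)   # modular inverse via pow(), available in Python 3.8+

# With a prime order n, every s in 1..n-1 is invertible.
print(modinv(5, 11))       # 9, since 5 * 9 = 45 = 1 (mod 11)

# With a composite order n, some s values have no inverse, which is one way
# a toy ECDSA verifier can fail even on a correctly generated signature.
try:
    modinv(6, 15)          # gcd(6, 15) = 3, so no inverse exists
except ValueError as err:
    print("inversion failed:", err)
```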
-
Thanks for the answer. As I mentioned before, it is for education purposes, so security is not really an issue. So I can simply remove points with composite order and use only the prime ones. I did not know whether that would be a correct "solution" :) – eXPi May 24 '12 at 16:49
http://mathhelpforum.com/advanced-algebra/172701-2-proofs-regarding-linear-transformations-operators-subspaces.html
# Thread:
1. ## 2 proofs regarding linear transformations and operators and subspaces
1.) Prove that $\ell^1(\mathbb{R}^\omega) = \{(\beta_1, \beta_2, \beta_3, \ldots) \in \mathbb{R}^\omega : \sum_{n = 1}^\infty |\beta_n| < \infty \}$ is a subspace of $\mathbb{R}^\omega$
2.) Prove that if $\phi: V \to W$ is a linear transformation, then the set $Null(\phi) = \{v \in V: \phi(v) = 0\}$ is a subspace of $V$.
Okay, so I really know very little about these things. (I'm also currently enrolled in a non-proof-based linear algebra class.)
I know (read: I've read) that $W$ is a subspace of $V$ iff:
1.) W is nonempty
2.) $\alpha, \beta \in \mathbb{R}$ and $w_1, w_2 \in W$ always implies $\alpha w_1 \oplus \beta w_2 \in W$
I also get the definition of a linear operator (but LaTeX takes a long time for me). I can try to type it out if it helps anyone.
Is the Null function significant here? I've never seen it. Any other hints or words of advice will be greatly appreciated. I hope the class goes back to differential equations soon, although I'm guessing this all will be significant.
Thanks!
2. You're definitely on the right track. As you said, just show that elements of $\ell^1$ are closed under addition and scalar multiplication. This will imply that zero is in there (as long as one element is there--then add it to itself multiplied by -1).
For the other one, just use the property of linearity: if f is linear, then f(a*x)=a*f(x) and f(x+y)=f(x)+f(y). That will show that the null set (usually called the kernel) is closed under addition and scalar multiplication. The kernel of a linear map is always a subspace.
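For instance, the kernel argument sketched above is essentially a one-line computation (stated here explicitly; only linearity is used): if $v_1, v_2 \in Null(\phi)$ and $\alpha, \beta \in \mathbb{R}$, then $\phi(\alpha v_1 + \beta v_2) = \alpha\phi(v_1) + \beta\phi(v_2) = \alpha \cdot 0 + \beta \cdot 0 = 0$, so $\alpha v_1 + \beta v_2 \in Null(\phi)$, which is exactly the closure condition required of a subspace.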
http://math.stackexchange.com/questions/218735/using-newtons-method-for-three-dimensional-system-in-matlab
Using Newton's method for a three-dimensional system in MATLAB
Find the Jacobian matrix $J(x, y, z)$ for the functions
$f(x, y, z) = x^2 + y^2 + z^2 − 9 = 0,$
$g(x, y, z) = xyz − 1 = 0,$
$h(x, y, z) = x + y − z^2 = 0.$
Then using Newton’s method for nonlinear systems, starting with different initial guesses, find four solutions of the above nonlinear set of equations with the accuracy of four decimal places. To find the solutions write a Matlab script to do Newton iteration for this as a three-dimensional system.
-
What have you tried? I have no interest in this system, so I have no desire to find the Jacobian or solve it. However, I'd be more than happy to direct you along the right path if you enlighten me as to where you are having issues. – Arkamis Oct 22 '12 at 17:09
1 Answer
I think this can answer part 1 of your question: http://www.mathworks.nl/help/symbolic/jacobian.html
Of course it will require the Matlab symbolic toolbox.
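For the Newton iteration itself, here is a rough numerical sketch (written in Python/NumPy rather than MATLAB, with an arbitrary illustrative starting guess; the Jacobian is assembled by hand from the three functions). Translating it into a MATLAB script is mechanical:
```
import numpy as np

def F(v):
    x, y, z = v
    return np.array([x**2 + y**2 + z**2 - 9.0,
                     x * y * z - 1.0,
                     x + y - z**2])

def J(v):                       # the Jacobian of (f, g, h)
    x, y, z = v
    return np.array([[2.0 * x, 2.0 * y, 2.0 * z],
                     [y * z,   x * z,   x * y],
                     [1.0,     1.0,     -2.0 * z]])

def newton(v0, tol=1e-10, max_iter=50):
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(v), F(v))   # solve J * step = F
        v = v - step
        if np.max(np.abs(step)) < tol:
            break
    return v

# One illustrative starting guess; other guesses lead to the other roots.
print(np.round(newton([2.0, 0.5, 1.5]), 4))
```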
-
I guess this is the best solution, strictly speaking, if the user is not at all interested in knowing how to do the math and is using MATLAB... It's a bit of a shame though if this is the case... – rschwieb Oct 23 '12 at 12:25
http://unapologetic.wordpress.com/2009/08/28/subgroups-generated-by-shears/?like=1&source=post_flair&_wpnonce=a082172354
# The Unapologetic Mathematician
## Subgroups Generated by Shears
Okay, when I introduced elementary matrices I was a bit vague on the subgroup that the shears generate. I mean to partially rectify that now that we’ve got elementary row operations to work with.
I assert that the upper shears — those elementary matrices with one nonzero entry above the diagonal — generate the group of “upper unipotent” matrices. A matrix is unipotent if it is the identity plus a nilpotent transformation, so all of its eigenvalues are ${1}$. Specifically, we want those that are also upper-triangular. Thus the matrices we’re talking about have ${0}$ everywhere below the diagonal and all ${1}$ on the diagonal, as
$\displaystyle A=\begin{pmatrix}1&a_{1,2}&\cdots&&a_{1,n}\\{0}&1&&&\\\vdots&&\ddots&&\vdots\\&&&1&a_{n-1,n}\\{0}&&\cdots&0&1\end{pmatrix}$
What I’m going to do, now, is turn this matrix into the identity by using elementary row operations. Specifically, I’ll only ever add a multiple of one row to another row, which can be effected by multiplying the matrix on the left by shear matrices. And I’ll only add multiples of one row to rows above it, which can be effected by using upper shears.
So first, let’s add $-a_{n-1,n}$ times the $n$th row to the $n-1$th.
$\displaystyle\left(H_{n-1,n,-a_{n-1,n}}\right)A=\begin{pmatrix}1&a_{1,2}&&\cdots&&a_{1,n}\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&a_{n-2,n-1}&a_{n-2,n}\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}$
We’ve cleared out the last entry in the next-to-last row in the matrix. Keep going, clearing out all the rest of the last column
$\displaystyle\left(H_{1,n,-a_{1,n}}\dots H_{n-1,n,-a_{n-1,n}}\right)A=\begin{pmatrix}1&a_{1,2}&&\cdots&&0\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&a_{n-2,n-1}&0\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}$
Now we can use the next-to-last row — which has only a single nonzero entry left — to clear out the rest of the next-to-last column
$\displaystyle\begin{aligned}\left(\left(H_{1,n-1,-a_{1,n-1}}\dots H_{n-2,n-1,-a_{n-2,n-1}}\right)\left(H_{1,n,-a_{1,n}}\dots H_{n-1,n,-a_{n-1,n}}\right)\right)A&\\=\begin{pmatrix}1&a_{1,2}&&\cdots&&0\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&0&0\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}&\end{aligned}$
Keep going, clearing out the columns from right to left
$\displaystyle\left(\left(H_{1,2,-a_{1,2}}\right)\dots\left(H_{1,n,-a_{1,n}}\dots H_{n-1,n,-a_{n-1,n}}\right)\right)A=\begin{pmatrix}1&0&&\cdots&&0\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&0&0\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}$
and we’ve got the identity matrix! So that means that this big product of all these upper shears on the left is actually the inverse to the matrix $A$ that we started with. Now we just multiply the inverse of each of these shears together in reverse order to find
$\displaystyle A=\left(\left(H_{n-1,n,a_{n-1,n}}\dots H_{1,n,a_{1,n}}\right)\dots\left(H_{1,2,a_{1,2}}\right)\right)$
So any upper-unipotent matrix can be written as a product of upper shears. Similarly, any lower-unipotent matrix (all ${0}$ above the diagonal and all ${1}$ on the diagonal) can be written as a product of lower shears. If we add in scalings, we can adjust the diagonal entries too. Given an invertible upper-triangular matrix, first factor it into the product of a diagonal matrix and an upper-unipotent matrix by dividing each row by its diagonal entry. Then the diagonal part can be built from scalings, while the upper-unipotent part can be built from shears. And, of course, scalings and lower shears together generate the subgroup of invertible lower-triangular matrices.
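As a concrete companion to the argument (a NumPy sketch, not part of the original post; the function names are purely illustrative), the elimination above can be carried out mechanically to recover the shear factors:
```
import numpy as np

def shear(n, i, j, c):
    """Elementary shear H_{i,j,c}: the identity plus c in entry (i, j), with i < j (an upper shear)."""
    H = np.eye(n)
    H[i, j] = c
    return H

def upper_unipotent_as_shears(A):
    """Factor an upper-unipotent matrix A into upper shears by clearing columns
    right to left, as in the elimination described above. Returns a list of
    shears whose left-to-right product is A."""
    n = A.shape[0]
    B = A.astype(float).copy()
    applied = []                            # shears applied on the left to reach I
    for j in range(n - 1, 0, -1):           # clear columns from right to left
        for i in range(j - 1, -1, -1):      # clear the entries above the diagonal
            c = -B[i, j]
            if c != 0.0:
                B = shear(n, i, j, c) @ B   # add c times row j to row i
                applied.append((i, j, c))
    # (H_k ... H_1) A = I, so A = H_1^{-1} H_2^{-1} ... H_k^{-1}
    return [shear(n, i, j, -c) for (i, j, c) in applied]

# Quick check on a random 4x4 upper-unipotent matrix.
A = np.triu(np.random.randn(4, 4), 1) + np.eye(4)
product = np.eye(4)
for H in upper_unipotent_as_shears(A):
    product = product @ H
print(np.allclose(product, A))              # True
```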
Posted by John Armstrong | Algebra, Linear Algebra
## 1 Comment »
1. [...] Generate the Special Linear Group We established that if we restrict to upper shears we can generate all upper-unipotent matrices. On the other hand if we use all shears and scalings [...]
Pingback by | September 9, 2009 | Reply
http://www.physicsforums.com/showthread.php?t=142297
Physics Forums
## Viscosity Woes :(
Hi,
I'm conducting a physics investigation where I'm changing the temperature of motor oil and seeing how the change in temperature affects the viscosity of the oil. I'm measuring the terminal velocity of a sphere in the oil (V). I'm using the Stokes' Law equation F = 6πRηV, which I've rearranged to η = F/(6πRV). I want to calculate the coefficient of viscosity directly, but I can't understand how to calculate the frictional force (F). Can anybody shed some light on how I calculate this?
I have never had to work out a problem like this, but in general for the drag force you have F = -Cv or F = -Dv^2, where you use the first for very small speeds and the second for large speeds. C and D are constants that depend on the shape of the object and the viscosity of the material.
Well, one way of doing this is the following: if the sphere is falling through the oil at a uniform rate, then the OTHER forces acting upon the sphere must balance the force of friction. Assuming a hydrostatic pressure distribution (which, at the very least, ought to require that the dimensions of the falling sphere are a lot less than the fluid volume), we may calculate the buoyancy force $F_{b}=\rho_{fluid}V_{sphere}g$ acting upon the sphere. In addition, you'll have gravity working: $$F_{g}=\rho_{sphere}V_{sphere}g$$ where the subscripted $V$ is the volume of the sphere and the $\rho$'s are the densities. Thus the force of friction needs to balance these two forces, which means that your viscosity should be calculable from: $$\eta=\frac{(\rho_{s}-\rho_{f})V_{s}g}{6\pi{r}V}=\frac{2}{9}\frac{(\rho_{s}-\rho_{f})gr^{2}}{V}$$ Note that the assumption of hydrostatic pressure can only be upheld if the given equation gives CONSISTENT values of $\eta$ for a large variety of test spheres. If we do not get consistent $\eta$-values, the most likely explanation is that we cannot neglect the velocity-induced changes in the pressure profile of the fluid. Thus, if you do this experiment, you might find that using a falling sphere through a viscous fluid is not a particularly good way to determine the viscosity of the fluid.
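To make that formula concrete, here is a minimal sketch (Python, with made-up example numbers standing in for your measured values):
```
# Falling-sphere viscosity estimate from the force balance above.
# All numbers below are illustrative placeholders -- substitute your measurements.
rho_sphere = 7800.0    # density of the ball, kg/m^3 (steel, assumed)
rho_fluid  = 880.0     # density of the motor oil, kg/m^3 (assumed)
r          = 1.0e-3    # sphere radius, m
v_terminal = 0.05      # measured terminal velocity, m/s
g          = 9.81      # m/s^2

# eta = (2/9) * (rho_s - rho_f) * g * r^2 / V
eta = (2.0 / 9.0) * (rho_sphere - rho_fluid) * g * r**2 / v_terminal
print(f"estimated viscosity: {eta:.3f} Pa*s")
```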
http://mathoverflow.net/questions/88033/existing-proofs-of-rokhlins-theorem-for-pl-manifolds/88051
## Existing proofs of Rokhlin’s theorem for PL manifolds
I'm looking for a comprehensive reference to existing proofs of Rokhlin's theorem that a 4-dimensional closed spin PL manifold has signature divisible by 16. I'm specifically interested in direct proofs (if any such exist) which do not rely on the fact that $\pi_i(PL/O)=0$ for small $i$.
The most commonly cited reference seems to be the book by Kirby "The Topology of 4-manifolds". But the proof there is for smooth manifolds and I'm not sure why it works for PL manifolds although I've seen it claimed in various places that it does. The same is said about Rokhlin's original proof but I don't know why that's true either. I would also like to know if other proofs for PL manifolds exist. I'm particularly interested to know if there is a PL proof based on the Atiyah-Singer index theorem.
-
Try section 1.5 of Mandelbaum's survey "Four-dimensional topology" projecteuclid.org/DPubS/Repository/1.0/…. It gives a sketch for PL manifolds with trivial first homology group. The proof does not use Atiyah-Singer theorem though. By the way, your link asks for mathscinet subscription and since I connect to mathscinet through library proxy I cannot use the link, and cannot even guess what paper it points to. – Igor Belegradek Feb 9 2012 at 22:35
It should be Section 1.4. (I own Russian edition where it is 1.5). – Igor Belegradek Feb 9 2012 at 22:37
Thanks, Igor. That paper does look interesting, but after looking at it briefly it seems to suffer from the same problems that I found in other proofs that I've seen. That is, it uses in the PL category some results which were only proved in the smooth case (such as that connected sum stabilization eventually turns PL h-cobordant manifolds into PL diffeomorphic ones). Also, to be clear, I don't insist on an Atiyah-Singer index theorem proof, but if there is one, I'd like to see it. Lastly, the link in my post was to Kirby's book that I mentioned. It just got mangled in formatting. I'll try to fix it. – Vitali Kapovitch Feb 10 2012 at 0:23
## 2 Answers
Another approach to the theorem that could probably be rewritten to work in the PL category is the approach of Kirby and Melvin in Appendix C of the following paper:
MR1117149 (92e:57011) Kirby, Robion(1-CA); Melvin, Paul(1-BRYN) The 3-manifold invariants of Witten and Reshetikhin-Turaev for sl(2,C). Invent. Math. 105 (1991), no. 3, 473–545.
See Corollary C6.
The idea of this approach is as follows. There is a famous $\mathbb{Z}/2$-invariant of homology $3$-spheres called the Rokhlin invariant. The usual definition of this invariant is as follows. Letting $M^3$ be a homology $3$-sphere, there exists a compact spin $4$-manifold $W^4$ with $\partial W^4 = M^3$. Let $\sigma$ be the signature of $W^4$. Rokhlin's theorem implies that modulo $16$, the value of $\sigma$ is independent of $W^4$. Since $\sigma$ is divisible by $8$ for number-theoretic reasons (namely, van der Blij's lemma about quadratic forms), the value of $\sigma/8$ is well-defined modulo $2$.
Using the Kirby calculus, Kirby and Melvin give a "$3$-dimensional" construction of the Rokhlin invariant, avoiding all mention of $4$-manifolds. They then go backwards and use this to prove Rokhlin's theorem about $4$-manifolds.
Looking at their proof, Kirby and Melvin use smoothness in two ways. The first is to prove that the Rokhlin invariant is well-defined. But this is harmless since (by work of Moise) all PL $3$-manifolds can be smoothed in a unique way. The second use of smoothness is to obtain a handlebody decomposition of the $4$-manifold. But this should be easier in the PL category!
-
Thanks, I'll certainly take a look at that paper. I really want to see a clean PL proof that makes it clear where exactly the PL structure is used and why the proof fails in the TOP category. From what you are saying it seems that in the approach you describe this happens at the handlebody decomposition step. It's still far from clear to me, though, because I think a topological handlebody decomposition always exists by Freedman. Isn't this right? – Vitali Kapovitch Feb 10 2012 at 4:00
@Vitali Kapovich : Not always. In fact, if a 4-manifold has a handle decomposition, then it is smoothable. The point is that the attaching maps are homeomorphisms of $3$-manifolds onto their images, and such maps are always isotopic to smooth maps. There's a brief discussion of this in Chapter 9.2 of Freedman and Quinn's book. – Andy Putman Feb 10 2012 at 4:20
@Andy Putman: that's a very nice observation! I didn't realize that. Of course this being the case my original question becomes somewhat moot: once one proves that a PL 4-manifold has a handlebody decomposition (which I think is obvious) it is then smoothable and hence the smooth Rokhlin's theorem applies. – Vitali Kapovitch Feb 10 2012 at 4:52
@Vitali : Yes, that's one way to do it. I wasn't sure what you wanted in your question -- I had assumed that you wanted a PL proof for aesthetic reasons or something. But if all you care about is the correctness of the result, then this is all you need. – Andy Putman Feb 10 2012 at 5:02
@Vitali Kapovitch : Isn't the handlebody observation I described essentially equivalent to PL=Diff in dimension 4 (the existence part, not the uniqueness part; but all you need is existence). – Andy Putman Feb 10 2012 at 5:47
There is a proof which uses quantum invariants. Since these invariants are typically defined using state-sums and are combinatorial in nature, I suppose that they work in the PL setting. A nice introduction is Justin Roberts' PhD thesis where Rohlin's theorem is proved as Corollary 5.14 at page 55.
-
Thanks. I must say that I'm completely unfamiliar with that approach. Is it discussed anywhere that it all really works in PL category? Roberts doesn't mention the issue at all as far as I can tell. – Vitali Kapovitch Feb 10 2012 at 0:39
I don't think that this issue is discussed. They define invariants on PL objects (triangulations or handle decompositions), then they prove that they are invariant under the corresponding moves (Pachner moves? and Kirby moves), then they show that the signature and w_2 can be computed from these invariants, and the algebra of the invariants show that 16 divides the signature when w_2=0. – Bruno Martelli Feb 10 2012 at 2:04
I mean, Roberts work with smooth manifolds but probably PL is enough, but one needs to check that carefully. – Bruno Martelli Feb 10 2012 at 2:05
http://mathematica.stackexchange.com/questions/15089/how-can-i-create-a-custom-control-which-evaluates-to-a-list-of-values-or-replace/15096
# How can I create a custom control which evaluates to a list of values or replacements?
I have a set of variables which are used in various places in my calculations (solving a system, initial conditions, etc.). In order to make this easier to deal with, I want to make a control-like thing which makes them easier to manipulate, rather than just using long lists of unlabeled values such as
````calculateValues[{0, 0, π, 0.5, 3}]
````
So far I have something like this:
This clearly doesn't work, and I expect in order to get it working properly I'll need to use `Interpretation` or more `Dynamic` incantations, but I can't figure out exactly what needs to happen. Any advice would be appreciated. Thanks in advance!
-
## 3 Answers
You could do
````myControl[varNames_List, start_: 0] :=
myControl[varNames, ConstantArray[start, Length@varNames]]
myControl[varNames_List, start_List] :=
Interpretation[{vars = start},
Panel[Grid[{varNames,
Array[InputField[Dynamic@vars[[#]],
FieldSize -> {{0, Infinity}, 1}] &, Length@vars]}]], vars]
````
Now you can run that function to create a control `myControl["a"~CharacterRange~"g"]`. Each instance will have its own values and they all evaluate to their values when supplied as input
-
This works great, thanks! But what is the purpose of the `DynamicSetting` and `Evaluate`? It seems that if I remove them, it works just the same way. Also, I would expect to need `Dynamic[vars]` instead of `vars` at the end... – jtbandes Nov 23 '12 at 20:10
@jtbandes, without the `DynamicSetting` it shouldn't work as I thought you wanted. Try running `myControl["a"~CharacterRange~"g"]` and then adding `+3` to the output in the output cell with and without the `DynamicSetting` – Rojo Nov 23 '12 at 21:14
The `Evaluate` is probably useless, but since `Interpretation` has attribute `HoldAll` and I don't know why it's not simply `HoldRest`, I wanted to make sure it evaluated the first argument. However, it seems to evaluate it anyway – Rojo Nov 23 '12 at 21:21
The second argument of `Interpretation` needs to be what you want the kernel to receive when you use that control as input. You want the values of `vars`, and not a visual GUI that shows the dynamically updated value of `vars`. Always remember that `Dynamic` is only for SHOWING updated output (or in a control such as `Slider[Dynamic@..]` "as a notation" for input in controls). – Rojo Nov 23 '12 at 21:25
@jtbandes, I just passed through the docs of `Interpretation` and saw that it can take an extra argument, specifying the local variables, and using it expands to the same code that I had used before: DynamicSetting@DynamicModule[..., but is way neater. Edited – Rojo Nov 25 '12 at 5:17
show 1 more comment
You could use `Interpretation`.
Here's the idea.
````SetAttributes[makeDynamicPanel, HoldFirst]
makeDynamicPanel[x_, defaultValue_] :=
Module[{y},
y =
Interpretation[
Panel@InputField[Dynamic[x]]
,
x
];
x = defaultValue;
y
]
````
Then you do for example
````makeDynamicPanel[z, {0, 0, π, 0.5, 3}]
````
and you get a panel that lets you edit the value of `z` and can be used as an argument to your function. You can even do, for example, panel + 2 and get a valid value; it's quite practical.
-
I would suggest using another list (`vals`) to store values for the variables `vars`. To set up a list of controllers (`InputField`s here) programmatically, some clever construct is required. Here I used an explicit `Function` call to save the changed value of the $i^{th}$ element of vals inside the `Dynamic` (as the `#` stands for the index of the list element, not the actual controller value, which is denoted as `$x`), but this can also be achieved using `With`.
````calculateValues[v_List] := Plus @@ (v^Range@Length@v);
vars = {a, b, c, d, e};
vals = {1, 2, 3, 4, 5};
Panel[
Grid[{
Text /@ vars,
InputField[Dynamic[vals[[#]], Function[{$x}, vals[[#]] = $x]],
FieldSize -> {{0, Infinity}, 1}] & /@ Range@Length@vals
}]]
````
````Dynamic@vals
````
````{1, 2, 3, 4, 5}
````
````calculateValues@vars
````
````a + b^2 + c^3 + d^4 + e^5
````
````Dynamic[calculateValues@vars /. Thread[vars -> vals]]
````
````3413
````
-
http://terrytao.wordpress.com/tag/connectedness/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
## Is there a countable certificate for connectedness?
13 April, 2010 in math.GN, math.LO, question | Tags: certificates, connectedness, path-connectedness | by Terence Tao | 19 comments
In topology, a non-empty set ${E}$ is said to be connected if it cannot be decomposed into two nontrivial subsets that are both closed and open relative to ${E}$, and path connected if any two points ${x,y}$ in ${E}$ can be connected by a path (i.e. there exists a continuous map ${\gamma: [0,1] \rightarrow E}$ with ${\gamma(0)=x}$ and ${\gamma(1)=y}$).
Path-connected sets are always connected, but the converse is not true, even in the model case of compact subsets of a Euclidean space. The classic counterexample is the set
$\displaystyle E := \{ (0,y): -1 \leq y \leq 1 \} \cup \{ (x, \sin(1/x)): 0 < x \leq 1 \}, \ \ \ \ \ (1)$
which is connected but not path-connected (there is no continuous path from ${(0,1)}$ to ${(1,\sin(1))}$).
Looking at the definitions of the two concepts, one notices a difference: the notion of path-connectedness is somehow a "positive" one, in the sense that a path-connected set can produce the existence of something (a path connecting two points ${x}$ and ${y}$) for a given type of input (in this case, a pair of points ${x, y}$). On the other hand, the notion of connectedness is a "negative" one, in that it asserts the non-existence of something (a non-trivial partition into clopen sets). To put it another way, it is relatively easy to convince someone that a set is path-connected (by providing a connecting path for every pair of points) or is disconnected (by providing a non-trivial partition into clopen sets), but if a set is not path-connected, or is connected, how can one easily convince someone of this fact? To put it yet another way: is there a reasonable certificate for connectedness (or for path-disconnectedness)?
In the case of connectedness for compact subsets ${E}$ of Euclidean space, there is an answer as follows. If ${\epsilon > 0}$, let us call two points ${x, y}$ in ${E}$ ${\epsilon}$-connected if one can find a finite sequence ${x_0 = x, x_1, \ldots, x_N = y}$ of points in ${E}$, such that ${|x_{i+1}-x_i| < \epsilon}$ for all ${0 \leq i < N}$; informally, one can jump from ${x}$ to ${y}$ in ${E}$ using jumps of length at most ${\epsilon}$. Let us call ${x_0,\ldots,x_N}$ an ${\epsilon}$-discrete path.
Proposition 1 (Connectedness certificate for compact subsets of Euclidean space) Let ${E \subset {\bf R}^d}$ be compact and non-empty. Then ${E}$ is connected if and only if every pair of points in ${E}$ is ${\epsilon}$-connected for every ${\epsilon > 0}$.
Proof: Suppose first that ${E}$ is disconnected, then ${E}$ can be partitioned into two non-empty closed subsets ${F, G}$. Since ${E}$ is compact, ${F, G}$ are compact also, and so they are separated by some non-zero distance ${\epsilon > 0}$. But then it is clear that points in ${F}$ cannot be ${\epsilon}$-connected to points in ${G}$, and the claim follows.
Conversely, suppose that there is a pair of points ${x,y}$ in ${E}$ and an ${\epsilon > 0}$ such that ${x,y}$ are not ${\epsilon}$-connected. Let ${F}$ be the set of all points in ${E}$ that are ${\epsilon}$-connected to ${x}$. It is easy to check that ${F}$ is open, closed, and a proper subset of ${E}$; thus ${E}$ is disconnected. $\Box$
We remark that the above proposition in fact works for any compact metric space. It is instructive to see how the points ${(1,\sin(1))}$ and ${(0,1)}$ are ${\epsilon}$-connected in the set (1); the ${\epsilon}$-discrete path follows the graph of ${\sin(1/x)}$ backwards until one gets sufficiently close to the ${y}$-axis, at which point one “jumps” across to the ${y}$-axis to eventually reach ${(0,1)}$.
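As an aside not in the original post, the ${\epsilon}$-connectedness condition is easy to test on a finite sample of a compact set: it is just graph connectivity, with edges joining sample points at distance less than ${\epsilon}$. Here is a rough Python sketch, with the sample sizes and the value of ${\epsilon}$ chosen purely for illustration:
```
import numpy as np
from collections import deque

def eps_connected(points, start, goal, eps):
    """Breadth-first search: can we get from points[start] to points[goal]
    using jumps of length strictly less than eps within the sample?"""
    pts = np.asarray(points, dtype=float)
    seen, queue = {start}, deque([start])
    while queue:
        k = queue.popleft()
        if k == goal:
            return True
        near = np.nonzero(np.linalg.norm(pts - pts[k], axis=1) < eps)[0]
        for m in near:
            if int(m) not in seen:
                seen.add(int(m))
                queue.append(int(m))
    return False

# A finite sample of the set (1): the graph of sin(1/x) plus the segment on the y-axis.
xs = np.linspace(1e-3, 1.0, 2000)
graph = np.column_stack([xs, np.sin(1.0 / xs)])
segment = np.column_stack([np.zeros(200), np.linspace(-1.0, 1.0, 200)])
sample = np.vstack([graph, segment])

a = int(np.argmin(np.linalg.norm(sample - np.array([1.0, np.sin(1.0)]), axis=1)))  # near (1, sin 1)
b = int(np.argmin(np.linalg.norm(sample - np.array([0.0, 1.0]), axis=1)))          # near (0, 1)

# For this particular sample and this eps the two endpoints are eps-connected,
# even though no continuous path joins them in the underlying set.
print(eps_connected(sample, a, b, eps=0.1))   # True
```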
It is also interesting to contrast the above proposition with path connectedness. Clearly, if two points ${x, y}$ are connected by a path, then they are ${\epsilon}$-connected for every ${\epsilon > 0}$ (because every continuous map ${\gamma: [0,1] \rightarrow E}$ is uniformly continuous); but from the analysis of the example (1) we see that the converse is not true. Roughly speaking, the various ${\epsilon}$-discrete paths from ${x}$ to ${y}$ have to be “compatible” with each other in some sense in order to synthesise a continuous path from ${x}$ to ${y}$ in the limit (we will not make this precise here).
But this leaves two (somewhat imprecise) questions, which I do not know how to satisfactorily answer:
Question 1: Is there a good certificate for path disconnectedness, say for compact subsets ${E}$ of ${{\bf R}^d}$? One can construct lousy certificates, for instance one could look at all continuous paths in ${{\bf R}^d}$ joining two particular points ${x, y}$ in ${E}$, and verify that each one of them leaves ${E}$ at some point. But this is an “uncountable” certificate – it requires one to check an uncountable number of paths. In contrast, the certificate in Proposition 1 is basically a countable one (if one describes a compact set ${E}$ by describing a family of ${\epsilon}$-nets for a countable sequence of ${\epsilon}$ tending to zero). (Very roughly speaking, I would like a certificate that can somehow be “verified in countable time” in a suitable oracle model, as discussed in my previous post, though I have not tried to make this vague specification more rigorous.)
It is tempting to look at the equivalence classes of ${E}$ given by the relation of being connected by a path, but these classes need not be closed (as one can see with the example (1)) and it is not obvious to me how to certify that two such classes are not path-connected to each other.
Question 2: Is there a good certificate for connectedness for closed but unbounded closed subsets of ${{\bf R}^d}$? Proposition 1 fails in this case; consider for instance the set
$\displaystyle E := \{ (x,0): x \in {\bf R} \} \cup \{ (x,\frac{1}{x}): x > 0 \}. \ \ \ \ \ (2)$
Any pair of points ${x,y \in E}$ is ${\epsilon}$-connected for every ${\epsilon > 0}$, and yet this set is disconnected.
The problem here is that as ${\epsilon}$ gets smaller, the ${\epsilon}$-discrete paths connecting a pair of points such as ${(1,0)}$ and ${(1,1)}$ have diameter going to infinity. One natural guess is then to require a uniform bound on the diameter, i.e. that for any pair of points ${x, y}$, there exists an ${R>0}$ such that there is an ${\epsilon}$-discrete path from ${x}$ to ${y}$ of diameter at most ${R}$ for every ${\epsilon > 0}$. This does indeed force connectedness, but unfortunately not all connected sets have this property. Consider for instance the set
$\displaystyle E := \{ (x,y,0): x \in {\bf R}, y = \pm 1 \} \cup \bigcup_{n=1}^\infty (E_n \cup F_n) \ \ \ \ \ (3)$
in ${{\bf R}^3}$, where
$\displaystyle E_n := \{ (x,y,0): \frac{x^2}{n^2} + \frac{y^2}{(1-1/n)^2} = 1\}$
is an ellipse centered at the origin with minor diameter endpoints ${(0,1/n-1), (0,1-1/n)}$ and major diameter endpoints ${(-n,0), (n,0)}$, and
$\displaystyle F_n := \{ (n,y,z): (y-1/2)^2+z^2=1/4 \}$
is a circle that connects the ${(n,0)}$ endpoint of ${E_n}$ to the point ${(n,1)}$ in ${E}$. One can check that ${E}$ is a closed connected set, but the ${\epsilon}$-discrete paths connecting ${(0,-1)}$ with ${(0,+1)}$ have unbounded diameter as ${\epsilon \rightarrow 0}$.
Currently, I do not have any real progress on Question 1. For Question 2, I can only obtain the following strange “second-order” criterion for connectedness, that involves an unspecified gauge function ${\delta}$:
Proposition 2 (Second-order connectedness certificate) Let ${E}$ be a closed non-empty subset of ${{\bf R}^d}$. Then the following are equivalent:
• ${E}$ is connected.
• For every monotone decreasing, strictly positive function ${\delta: {\bf R}^+ \rightarrow {\bf R}^+}$ and every ${x,y \in E}$, there exists a discrete path ${x_0=x,x_1,\ldots,x_N=y}$ in ${E}$ such that ${|x_{i+1}-x_i| < \delta(|x_i|)}$.
Proof: This is proven in almost the same way as Proposition 1. If ${E}$ can be disconnected into two non-trivial sets ${F, G}$, then one can find a monotone decreasing gauge function ${\delta: {\bf R}^+ \rightarrow {\bf R}^+}$ such that for each ball ${B_R := \{ x \in {\bf R}^d: |x| \leq R \}}$, ${F \cap B_R}$ and ${G}$ are separated by at least ${\delta(R)}$, and then there is no discrete path from ${F}$ to ${G}$ in ${E}$ obeying the condition ${|x_{i+1}-x_i| < \delta(|x_i|)}$.
Conversely, if there exists a gauge function ${\delta}$ and two points ${x,y \in E}$ which cannot be connected by a discrete path in ${E}$ that obeys the condition ${|x_{i+1}-x_i| < \delta(|x_i|)}$, then if one sets ${F}$ to be all the points that can be reached from ${x}$ in this manner, one easily verifies that ${F}$ and ${E \backslash F}$ disconnect ${E}$. $\Box$
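The same kind of finite search illustrates the second-order certificate: the only change is that the allowed step length from ${x_i}$ now depends on ${|x_i|}$ through the gauge ${\delta}$. Applied to a sample of the set (2) with the (illustrative) gauge ${\delta(r) = 1/(1+r)}$, the search finds no admissible path from the ${x}$-axis to the hyperbola, which is the disconnectedness certificate in action; again, a finite sample only illustrates the idea.

```
import numpy as np
from collections import deque

def gauge_connected(points, x, y, delta):
    # Search for a discrete path x_0 = x, ..., x_N = y inside the finite
    # sample `points`, where each step from x_i must have length < delta(|x_i|).
    pts = np.asarray(points, dtype=float)
    seen, queue = {x}, deque([x])
    while queue:
        i = queue.popleft()
        if i == y:
            return True
        step = delta(np.linalg.norm(pts[i]))
        for j in np.nonzero(np.linalg.norm(pts - pts[i], axis=1) < step)[0]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return False

# Sample of the set (2): the x-axis together with the graph of 1/x for x > 0.
xs = np.linspace(-20.0, 20.0, 4001)
axis = np.column_stack([xs, np.zeros_like(xs)])
branch = np.column_stack([xs[xs > 0], 1.0 / xs[xs > 0]])
sample = np.vstack([axis, branch])
delta = lambda r: 1.0 / (1.0 + r)          # a monotone decreasing gauge
i0 = int(np.argmin(np.linalg.norm(sample - np.array([1.0, 0.0]), axis=1)))  # near (1, 0)
i1 = int(np.argmin(np.linalg.norm(sample - np.array([1.0, 1.0]), axis=1)))  # near (1, 1)
print(gauge_connected(sample, i0, i1, delta))   # False: the gauge blocks every jump
```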
It may be that this is somehow the “best” one can do, but I am not sure how to quantify this formally.
Anyway, I was curious if any of the readers here (particularly those with expertise in point-set topology or descriptive set theory) might be able to shed more light on these questions. (I also considered crossposting this to Math Overflow, but I think the question may be a bit too long (and vague) for that.)
(The original motivation for this question, by the way, stems from an attempt to use methods of topological group theory to attack questions in additive combinatorics, in the spirit of the paper of Hrushovski studied previously on this blog. The connection is rather indirect, though; I may discuss this more in a future post.)
http://physics.stackexchange.com/questions/45796/why-is-the-canonical-momentum-for-the-dirac-equation-not-defined-in-terms-of-t/45797
Why is the “canonical momentum” for the Dirac equation not defined in terms of the “gauge covariant derivative”?
The canonical momentum is always used to add an EM field to the Schrödinger/Pauli/Dirac equations. Why does one not use the gauge covariant derivative? As far as I can see, the difference is a factor `i` in front of the vector potential. I know I'm combining two seemingly unrelated things, but they seem very similar, and the covariant form seems much "better" with respect to the inherent gauge freedom in the EM field. I can also see that with the canonical momentum form, the equations remain unchanged after an EM and a QM (phase) gauge transformation. Suffice it to say my field theory knowledge is not that impressive.
2 Answers
The identification goes as follows:
$$\text{Kin. Mom.}~=~ \text{Can. Mom.} ~-~\text{Charge} \times \text{Gauge Pot.}$$
$$\updownarrow$$
$$m\hat{v}_{\mu} ~=~ \hat{p}_{\mu} - qA_{\mu}(\hat{x})$$
$$\updownarrow$$
$$\frac{\hbar}{i} D_{\mu} ~=~ \frac{\hbar}{i}\partial_{\mu} - qA_{\mu}(x)$$
$$\updownarrow$$
$$D_{\mu} ~=~ \partial_{\mu} -\frac{i}{\hbar} qA_{\mu}(x)$$
$$\updownarrow$$
$$\text{Cov. Der.}~=~ \text{Par. Der.} ~-~\frac{i}{\hbar}\text{Charge} \times \text{Gauge Pot.}$$
The imaginary unit $i$ is needed, e.g., because the derivative is an anti-hermitian operator (recall the usual integration-by-parts argument), while the momentum is required to be a hermitian operator in quantum mechanics.
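One quick way to see that this is the natural combination is to check gauge covariance directly: under $A_\mu \to A_\mu + \partial_\mu\chi$ together with the phase redefinition $\psi \to e^{iq\chi/\hbar}\psi$,

$$D_\mu\psi ~\to~ \partial_\mu\!\left(e^{iq\chi/\hbar}\psi\right) - \frac{i}{\hbar}q\left(A_\mu+\partial_\mu\chi\right)e^{iq\chi/\hbar}\psi ~=~ e^{iq\chi/\hbar}\,D_\mu\psi ,$$

so $D_\mu\psi$ transforms with the same phase as $\psi$ itself, whereas $\partial_\mu\psi$ alone does not. This is the combined "EM plus QM (phase) gauge transformation" invariance mentioned in the question.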
Wow, I knew this. I wasn't thinking straight today, gotta remember to not post questions when I have a fever. Thanks for the clear explanation! – rubenvb Dec 3 '12 at 18:19
(Qmechanic has already given the answer; however, since I spent some time writing the answer below, I am posting it anyway.)
Consider a charged particle with charge $q$ and (nonzero) rest mass $m$ moving in a spacetime with coordinates $(x^0,x^1,...,x^{n-1})$. When there is no electromagnetic field, the action of the particle is given as
$\tag {1} S=-mc\displaystyle\int \sqrt {\eta_{\mu\nu}\dot x^\mu(\lambda)\dot x^\nu(\lambda)}d\lambda$
where $\lambda$ is the parameter along the trajectory $x^{\mu}(\lambda)$ of the particle and $\dot x^{\mu}$ means $\partial x^{\mu}(\lambda)/\partial\lambda$. Note that the action is Lorentz invariant. When there is a $U(1)$ gauge field $-iA_{\mu}dx^{\mu}$, we can add one more Lorentz-invariant term to this action to generalize it as:
$\tag{2}S=-mc\displaystyle\int \sqrt {\eta_{\mu\nu}\dot x^\mu (\lambda)\dot x^\nu (\lambda)}d\lambda-(q/c)\displaystyle\int \eta_{\mu\nu}A^{\mu}\dot x^{\nu}(\lambda)d\lambda$
Now, in order to proceed, it is convenient to work in a particular inertial frame and look at things from the viewpoint of the inertial observer corresponding to that frame. In such a frame we can take $x^0(\lambda)/c=t=\lambda$. The above integral ran from some point $\lambda_{0}$ to some $\lambda_{1}$; it now becomes an integral from $t_0=x^0(\lambda_0)/c$ to $t_1=x^0(\lambda_1)/c$ and can be written as
$\tag{3} S=-mc^2\displaystyle\int \sqrt {1-v^2/c^2}dt-\displaystyle\int (qA^{0}-\frac {q}{c} \sum_{i}v^{i}.A^{i}) dt$
So
$\tag{4} L=-mc^2\sqrt {1-v^2/c^2}- (qA^{0}-\displaystyle \frac {q}{c} \sum_{i}v^{i}.A^{i})$
The canonical momentum corresponding to $x^i$ can now be obtained as the partial derivative of $L$ with respect to $v^i=dx^i/dt$, and is given as
$\tag{5} \pi_i=mv^i/\sqrt {1-v^2/c^2}+\displaystyle\frac{q}{c}A^{i}$
Thus, as Qmechanic has answered, the canonical momentum corresponding to the $i$-th coordinate is the physical momentum along that coordinate plus a contribution from the gauge potential.
Even without choosing a particular inertial frame, we could find the canonical momentum $\pi_{\mu}$ corresponding to $x^\mu$ by taking the derivative of $L$ in its covariant form with respect to $\dot x^\mu$. This would give
$\tag{6}\pi_{\mu}=-mc\eta_{\mu\nu}\dot x^{\nu}/\sqrt {\eta_{\alpha\beta}\dot x^\alpha\dot x^\beta}-\displaystyle\frac{q}{c}A_{\mu}$
Now, in classical mechanics the above equation is nothing but a map from velocity space to phase space. It is only when we move to QM that we represent canonical momenta as derivatives with respect to spatial coordinates. Again, for convenience, let us work in a particular inertial frame. Here the momentum conjugate to $x^i$ is $\pi_i$, as given by equation [5]. So, as usual in QM, we quantize by requiring the corresponding operators to satisfy
$\tag{7}[ X^i,\Pi_j]=i\delta^i_{j}\hbar$
We can represent this algebra on the Hilbert space of functions on space $R^{n-1}$ (note that spacetime is $R^n$) by defining $X^i$ to be the operator which acts as multiplication by $x^i$, and $\Pi_i$ to be the operator which acts as the derivative $-i\hbar\partial/\partial x^i$. From equation [5] we see that
$$\tag{8}mechanical\: momentum\: operator=-i\hbar\partial/\partial x^i-\displaystyle\frac{q}{c}A^{i}$$
Or, factoring out $-i\hbar$, we get
$$\tag{9}mechanical\: momentum\: operator=-i\hbar(\partial/\partial x^i-i\displaystyle\frac{q}{c\hbar}A^{i})=-i\hbar D_i$$
Where $D_i=\partial/\partial x^i-i\displaystyle\frac{q}{c\hbar}A^{i}$
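As a small cross-check of equation [5], one can let a computer algebra system do the derivative; a sketch in Python/sympy, restricted to a single spatial velocity component $v$ for brevity (symbol names here are illustrative):

```
import sympy as sp

m, c, q, v, A0, A = sp.symbols('m c q v A0 A', real=True)

# Lagrangian of eq. [4], keeping a single spatial velocity component v and the
# corresponding component A of the vector potential.
L = -m*c**2*sp.sqrt(1 - v**2/c**2) - (q*A0 - (q/c)*v*A)

# Canonical momentum conjugate to that coordinate, eq. [5]:
pi = sp.diff(L, v)
print(pi)    # A*q/c + m*v/sqrt(1 - v**2/c**2)
```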
http://math.stackexchange.com/questions/284144/rank-of-a-subgroup-of-a-free-abelian-group
# Rank of a Subgroup of a Free Abelian Group.
Question:
Let $F$ be a free abelian group with countable rank. Let $G \leq F$. Is the rank of $G$ countable?
I know that this is true when $F$ has a finite rank.
Thanks
This has come up before, at least in greater generality (infinite rank as opposed to specifically countable rank). First however a comment on terminology. Countable means countably infinite. If you want a word that means either finite or countably infinite, the best choice is denumerable. – hardmath Jan 22 at 12:21
@hardmath: you talk as though your use of countable is completely standard. It is not. Many mathematicians use it to mean finite or countably infinite. To me it seems very strange to say that a set with three elements is not countable. – Derek Holt Jan 22 at 21:24
@DerekHolt: I posted to solicit clarification because the use seemed ambiguous. – hardmath Jan 22 at 23:38
I tell my students that a set is countable if you can count the elements in it... Formally, $S$ is countable if there exists a bijection from some subset of $\mathbb{N}$ to $S$. – user1729 Jan 23 at 10:37
## 4 Answers
Let $A$ be a free abelian group and $X$ be a free basis of $A$. Then there is a bijection between $A$ and $\mathbb{Z}^{(X)}$ (that is, the set of functions from $X$ to $\mathbb{Z}$ with finite support), so $|A|=|X|$ if $X$ is infinite.
Consequently, if the rank of $B \leq A$ is $\kappa$, then $\kappa \leq |B| \leq |A|=|X|$, the rank of $A$.
I am a bit confused. I've always thought it's $\mathbb Z^{\oplus X} \cong A$, where $\mathbb Z^{\oplus X}$ is a set of functions from $X$ to $\mathbb Z$ with finite support. (How did you define "support"?) – Tunococ Jan 22 at 12:36
@Tunococ: You are completely right, I permuted $X$ and $\mathbb{Z}$. I corrected it, thank you. – Seirios Jan 22 at 14:29
Every abelian group can be seen as a $\mathbb{Z}$-module, and there is a theorem which says that if you have a free $R$-module $A$ and an $R$-submodule $B$ of $A$, with $R$ a PID, then $B$ is free and the rank of $B$ is at most the rank of $A$. Therefore, since $\mathbb{Z}$ is a PID, the answer to your question is yes.
Is there an easier explanation. As I am still studying the group theory chapter. – Amr Jan 22 at 11:23
Well, I'm a little low, but I think so. Maybe others can help you better than me, but let me think a second. – Diego Silvera Jan 22 at 11:28
Suppose $X = \{x_1, x_2, \ldots\} \subset F$ is a countable set that generates $F$. Let $M_n = \langle x_1, \ldots, x_n \rangle \cong \mathbb Z^n$. We see that $|M_n| = |\mathbb Z|$ for all $n$. Since $F = \bigcup_{n=1}^\infty M_n$, it follows that $|F| = |\mathbb Z|$.
The answer to the question as asked is, $G$ is not necessarily of countable rank.
The subgroup $G$ is necessarily free abelian (cf. "F is a free abelian group on a set X, H ⊆ F is a free abelian group on Y, then |Y| ≤ |X|"), but the question seems to assume this fact and ask rather about the rank of $G$ being countable.
What can be said (see link) is that rank of $G$ is at most the rank of $F$. The latter is assumed countable, but the rank of $G$ could either be countable or finite (i.e. denumerable as explained in my comment). The simplest case is $G$ of rank $1$, that is the infinite cyclic subgroup generated by any nonidentity element of $F$.
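To make the bound concrete, write $F = \bigoplus_{n \in \mathbb{N}} \mathbb{Z}e_n$. Then, for instance, the subgroup $\langle e_1 \rangle$ has rank $1$, while the subgroup $\bigoplus_{n \in \mathbb{N}} 2\mathbb{Z}e_n$ has countably infinite rank; in both cases the rank is at most the (countably infinite) rank of $F$, and both possibilities do occur.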
I think the OP means finite or countably infinite by countable.(i.e. There is an injection from the set to N) – Amr Jan 22 at 14:28
@Amr: Perhaps. I put a comment on the question itself for clarification. In any case a couple of the answers point out that $F$ itself is a countable set, so any basis for $G$ is at most countable. – hardmath Jan 22 at 14:44
http://en.wikipedia.org/wiki/Kinematics
Kinematics
Kinematics is the branch of classical mechanics that describes the motion of points, bodies (objects) and systems of bodies (groups of objects) without consideration of the causes of motion.[1][2][3] The term is the English version of A.M. Ampère's cinématique,[4] which he constructed from the Greek κίνημα, kinema (movement, motion), derived from κινεῖν, kinein (to move).[5] [6]
The study of kinematics is often referred to as the geometry of motion.[7] (See analytical dynamics for more detail on usage).
To describe motion, kinematics studies the trajectories of points, lines and other geometric objects and their differential properties such as velocity and acceleration. Kinematics is used in astrophysics to describe the motion of celestial bodies and systems, and in mechanical engineering, robotics and biomechanics[8] to describe the motion of systems composed of joined parts (multi-link systems) such as an engine, a robotic arm or the skeleton of the human body.
The study of kinematics can be abstracted into purely mathematical expressions. For instance, rotation can be represented by elements of the unit circle in the complex plane. Other planar algebras are used to represent the shear mapping of classical motion in absolute time and space and to represent the Lorentz transformations of relativistic space and time. By using time as a parameter in geometry, mathematicians have developed a science of kinematic geometry.
The use of geometric transformations, also called rigid transformations, to describe the movement of components of a mechanical system simplifies the derivation of its equations of motion, and is central to dynamic analysis.
Kinematic analysis is the process of measuring the kinematic quantities used to describe motion. In engineering, for instance, kinematic analysis may be used to find the range of movement for a given mechanism, and, working in reverse, kinematic synthesis designs a mechanism for a desired range of motion.[9] In addition, kinematics applies algebraic geometry to the study of the mechanical advantage of a mechanical system, or mechanism.
Kinematics of a particle trajectory
Kinematic quantities of a classical particle: mass m, position r, velocity v, acceleration a.
Particle kinematics is the study of the properties of the trajectory of a particle. The position of a particle is defined to be the coordinate vector from the origin of a coordinate frame to the particle. For example, consider a tower 50 m south from your home, where the coordinate frame is located at your home, such that East is the x-direction and North is the y-direction; then the coordinate vector to the base of the tower is r=(0, -50, 0). If the tower is 50 m high, then the coordinate vector to the top of the tower is r=(0, -50, 50).
Usually a three-dimensional coordinate system is used to define the position of a particle. However, if the particle is constrained to lie in a plane or on a sphere, a two-dimensional coordinate system can be used. All observations in physics are incomplete without the reference frame being specified.
The position vector of a particle is a vector drawn from the origin of the reference frame to the particle. It expresses both the distance of the point from the origin and its direction from the origin. In three dimensions, the position of point P can be expressed as
$\mathbf{P} = (x_P,y_P,z_P) = x_P\vec{i} + y_P\vec{j} + z_P\vec{k},$
where xP, yP, and zP are the Cartesian coordinates and i, j and k are the unit vectors along the x, y, and z coordinate axes, respectively. The magnitude of the position vector |P| gives the distance between the point P and the origin.
$|\mathbf{P}| = \sqrt{x_P^{\ 2} + y_P^{\ 2} + z_P^{\ 2}}.$
The direction cosines of the position vector provide a quantitative measure of direction. It is important to note that the position vector of a particle isn't unique. The position vector of a given particle is different relative to different frames of reference.
The trajectory of a particle is a vector function of time, P(t), which defines the curve traced by the moving particle, given by
$\mathbf{P}(t) = x_P(t)\vec{i} + y_P(t)\vec{j} +z_P(t) \vec{k},$
where the coordinates xP, yP, and zP are each functions of time.
The distance traveled is always greater than or equal to the magnitude of the displacement.
Velocity and speed
The velocity of a particle is a vector that tells about the direction and magnitude of the rate of change of the position vector, that is, how the position of a point changes with each instant of time. Consider the ratio of the difference of two positions of a particle divided by the time interval, which is called the average velocity over that time interval. This average velocity is defined as
$\overline{\mathbf{V}} = \frac {\Delta \mathbf{P}}{\Delta t} \ ,$
where ΔP is the difference in the position vector over the time interval Δt.
In the limit as the time interval Δt becomes smaller and smaller, the average velocity becomes the time derivative of the position vector,
$\mathbf{V} = \lim_{\Delta t\rightarrow0}\frac{\Delta\mathbf{P}}{\Delta t} = \frac {d \mathbf{P}}{d t}=\dot{\mathbf{P}} = \dot{x}_p\vec{i}+\dot{y}_P\vec{j}+\dot{z}_P\vec{k}.$
Thus, velocity is the time rate of change of position, and the dot denotes the derivative with respect to time. Furthermore, the velocity is tangent to the trajectory of the particle.
Since the position vector itself is frame dependent, the velocity of a particle is also dependent on the reference frame.
The speed of an object is the magnitude |V| of its velocity. It is a scalar quantity:
$|\mathbf{V}| = |\dot{\mathbf{P}} | = \frac {d s}{d t},$
where s is the arc-length measured along the trajectory of the particle. This arc-length traveled by a particle over time is a non-decreasing quantity. Hence, ds/dt is non-negative, which implies that speed is also non-negative.
Acceleration
The acceleration of a particle is the vector defined by the rate of change of the velocity vector. The average acceleration of a particle over a time interval is defined as the ratio
$\overline{\mathbf{A}} = \frac {\Delta \mathbf{V}}{\Delta t} \ ,$
where ΔV is the difference in the velocity vector and Δt is the time interval.
The acceleration of the particle is the limit of the average acceleration as the time interval approaches zero, which is the time derivative,
$\mathbf{A} = \lim_{\Delta t \rightarrow 0} \frac{\Delta \mathbf{V}}{\Delta t} = \frac {d \mathbf{V}}{d t} = \dot{\mathbf{V}} = \ddot{\mathbf{P}} = \ddot{x}_p\vec{i}+\ddot{y}_P\vec{j}+\ddot{z}_P\vec{k}.$
Thus, acceleration is the second derivative of the position vector that defines the trajectory of a particle.
Relative position vector
A relative position vector is a vector that defines the position of a particle relative to another particle. It is the difference in position of the two particles.
If point A has position PA = (xA,yA,zA) and point B has position PB = (xB,yB,zB), the displacement RB/A of B from A is given by
$\mathbf{R}_{B/A} = \mathbf{P}_B - \mathbf{P}_A = (x_B-x_A,y_B-y_A,z_B-z_A).$
Geometrically, the relative position vector RB/A is the vector from point A to point B. The values of the coordinate vectors of points vary with the choice of coordinate frame, however the relative position vector between a pair of points has the same length no matter what coordinate frame is used and is said to be frame invariant.
To describe the motion of a particle B relative to another particle A, we notice that the position B can be formulated as the position of A plus the position of B relative to A, that is
$\mathbf{P}_{B} = \mathbf{P}_{A} + (\mathbf{P}_{B} - \mathbf{P}_{A}) = \mathbf{P}_{A} + \mathbf{R}_{B/A}.$
Relative velocity
Main article: Relative velocity
Relative velocities between two particles in classical mechanics.
The relations between relative position vectors become relations between relative velocities by computing the time derivative. The second time derivative yields relations for relative accelerations.
For example, let the particle B move with velocity VB and particle A move with velocity VA in a given reference frame. Then the velocity of B relative to A is given by
$\mathbf{V}_{B/A} = \mathbf{V}_{B} -\mathbf{V}_{A} \,\! .$
This can be obtained by computing the time derivative of the relative position vector RB/A.
This equation provides a formula for the velocity of B in terms of the velocity of A and its relative velocity,
$\mathbf{V}_{B} = \mathbf{V}_{A} + \mathbf{V}_{B/A} \,\! .$
For large velocities V, where the fraction V/c is significant (c being the speed of light), another measure of relative motion called rapidity, which depends on this ratio, is used in special relativity.
Particle trajectories under constant acceleration
Newton's laws state that a constant force acting on a particle generates a constant acceleration. For example, a particle in a parallel gravity field experiences a force acting downwards that is proportional to the constant acceleration of gravity, and no force in the horizontal direction. This is called projectile motion.
If the acceleration vector A of a particle P is constant in magnitude and direction, the particle is said to be undergoing uniformly accelerated motion. In this case, the trajectory P(t) of the particle can be obtained by integrating the acceleration A with respect to time.
The first integral yields the velocity of the particle,
$\mathbf{V}(t) = \int_0^{t} \mathbf{A} dt = \mathbf{A}t + \mathbf{V}_0.$
A second integration yields its trajectory,
$\mathbf{P}(t) = \int_0^t \mathbf{V}(t) dt = \int(\mathbf{A}t + \mathbf{V}_0)dt = \tfrac{1}{2} \mathbf{A} t^2 + \mathbf{V}_0 t + \mathbf{P}_0.$
Additional relations between displacement, velocity, acceleration, and time can be derived. Since A = (V − V0)/t,
$\mathbf{P}(t) = \mathbf{P}_0 + \left(\frac{\mathbf{V}+ \mathbf{V}_0}{2}\right) t .$
By using the definition of an average, this equation states that when the acceleration is constant, the average velocity times time equals the displacement.
A relationship without explicit time dependence may also be derived using the relation At = V − V0,
$(\mathbf{P} - \mathbf{P}_0) \cdot \mathbf{A} t = \left( \mathbf{V} - \mathbf{V}_0 \right) \cdot \frac{\mathbf{V} + \mathbf{V}_0}{2} t \ ,$
where · denotes the dot product. Divide both sides by t and expand the dot-products to obtain,
$2(\mathbf{P} - \mathbf{P}_0) \cdot \mathbf{A} = |\mathbf{V}|^2 - |\mathbf{V}_0|^2.$
In the case of straight-line motion, where the displacement P − P0 is parallel to A, this equation becomes
$|\mathbf{V}|^2= |\mathbf{V}_0|^2 + 2 |\mathbf{A}|(|\mathbf{P}-\mathbf{P}_0|).$
This can be simplified using the notation |A|=a, |V|=v, and |P|=r, so
$v^2= v_0^2 + 2a(r-r_0).$
This relation is useful when time is not known explicitly.
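A small numerical illustration (made-up values: braking from 20 m/s at a constant 4 m/s²), in Python, using the time-free relation first and then cross-checking with the time-dependent ones:

```
# Constant acceleration: v0 = 20 m/s, a = -4 m/s^2, final speed v = 0.
v0, v, a = 20.0, 0.0, -4.0

dr = (v**2 - v0**2) / (2 * a)        # from v^2 = v0^2 + 2 a (r - r0): 50 m
t = (v - v0) / a                     # from v = v0 + a t: 5 s
dr_check = v0 * t + 0.5 * a * t**2   # from r = r0 + v0 t + (1/2) a t^2: 50 m
print(dr, t, dr_check)               # 50.0 5.0 50.0
```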
Figure 2: Velocity and acceleration for nonuniform circular motion: the velocity vector is tangential to the orbit, but the acceleration vector is not radially inward because of its tangential component aθ that increases the rate of rotation: dω/dt = |aθ|/R.
Particle trajectories in cylindrical-polar coordinates
It is often convenient to formulate the trajectory of a particle P(t) = (X(t), Y(t), Z(t)) using polar coordinates in the X-Y plane. In this case, its velocity and acceleration take a convenient form.
Recall that the trajectory of a particle P is defined by its coordinate vector P measured in a fixed reference frame F. As the particle moves, its coordinate vector P(t) traces its trajectory, which is a curve in space, given by
$\textbf{P}(t) = X(t)\vec{i} + Y(t)\vec{j} + Z(t)\vec{k},$
where i, j, and k are the unit vectors along the X, Y and Z axes of the reference frame F, respectively.
If a particle P moves on the surface of a circular cylinder, it is possible to align the Z axis of the fixed frame F with the axis of the cylinder. Then, the angle θ around this axis in the X-Y plane can be used to define the trajectory as,
$\textbf{P}(t) = R\cos\theta(t)\vec{i} + R\sin\theta(t)\vec{j} + Z(t)\vec{k}.$
The cylindrical coordinates for P(t) can be simplified by introducing the radial and tangential unit vectors,
$\textbf{e}_r = \cos\theta(t)\vec{i} + \sin\theta(t)\vec{j}, \quad \textbf{e}_t = -\sin\theta(t)\vec{i} + \cos\theta(t)\vec{j}.$
Using this notation, P(t) takes the form,
$\textbf{P}(t) = R\textbf{e}_r + Z(t)\vec{k},$
where R is constant.
Now, in general, the trajectory P(t) is not constrained to lie on a circular cylinder, so the radius R varies with time, and the trajectory in cylindrical-polar coordinates becomes
$\textbf{P}(t) = R(t)\textbf{e}_r + Z(t)\vec{k}.$
The velocity vector VP is the time derivative of the trajectory P(t), which yields,
$\textbf{V}_P = \frac{d}{dt}(R(t)\textbf{e}_r + Z(t)\vec{k}) = \dot{R}\textbf{e}_r + R\dot{\theta}\textbf{e}_t + \dot{Z}\vec{k},$
where
$\frac{d}{dt}\textbf{e}_r = \dot{\theta}\textbf{e}_t.$
In this case, the acceleration AP, which is the time derivative of the velocity VP, is given by
$\textbf{A}_P = \frac{d}{dt}(\dot{R}\textbf{e}_r + R\dot{\theta}\textbf{e}_t + \dot{Z}(t)\vec{k}) = (\ddot{R} - R\dot{\theta}^2)\textbf{e}_r + (R\ddot{\theta} + 2\dot{R}\dot{\theta})\textbf{e}_t + \ddot{Z}(t)\vec{k}.$
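These components can be checked symbolically; a minimal Python/sympy sketch that differentiates the trajectory twice and projects the result onto the rotating unit vectors (variable names are illustrative):

```
import sympy as sp

t = sp.symbols('t')
R, theta, Z = (sp.Function(name)(t) for name in ('R', 'theta', 'Z'))

# Cylindrical unit vectors written in fixed Cartesian components.
e_r = sp.Matrix([sp.cos(theta), sp.sin(theta), 0])
e_t = sp.Matrix([-sp.sin(theta), sp.cos(theta), 0])
k = sp.Matrix([0, 0, 1])

P = R * e_r + Z * k          # trajectory P(t) = R(t) e_r + Z(t) k
A = sp.diff(P, t, 2)         # acceleration = second time derivative

print(sp.simplify(A.dot(e_r)))   # R'' - R*theta'^2        (radial component)
print(sp.simplify(A.dot(e_t)))   # R*theta'' + 2*R'*theta'  (tangential component)
print(sp.simplify(A.dot(k)))     # Z''
```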
If the radius is constant
If the trajectory of the particle is constrained to lie on a cylinder, then the radius R is constant and the velocity and acceleration vectors simplify. The velocity of VP is the time derivative of the trajectory P(t),
$\textbf{V}_P = \frac{d}{dt}(R\textbf{e}_r + Z(t)\vec{k}) = R\dot{\theta}\textbf{e}_t + \dot{Z}\vec{k}.$
The acceleration vector becomes
$\textbf{A}_P = \frac{d}{dt}(R\dot{\theta}\textbf{e}_t + \dot{Z}\vec{k}) = - R\dot{\theta}^2\textbf{e}_r + R\ddot{\theta}\textbf{e}_t + \ddot{Z}\vec{k}.$
Planar circular trajectories
Each particle on the wheel travels in a planar circular trajectory (Kinematics of Machinery, 1876).[10]
A special case of a particle trajectory on a circular cylinder occurs when there is no movement along the Z axis, in which case
$\textbf{P}(t) = R\textbf{e}_r + Z_0\vec{k},$
where R and Z0 are constants. In this case, the velocity VP is given by
$\textbf{V}_P = \frac{d}{dt}(R\textbf{e}_r + Z_0\vec{k}) = R\dot{\theta}\textbf{e}_t =R\omega\textbf{e}_t,$
where
$\omega = \dot{\theta},$
is the angular velocity of the unit vector et around the z axis of the cylinder.
The acceleration AP of the particle P is now given by
$\textbf{A}_P = \frac{d}{dt}(R\dot{\theta}\textbf{e}_t) = - R\dot{\theta}^2\textbf{e}_r + R\ddot{\theta}\textbf{e}_t.$
The components
$a_r = - R\dot{\theta}^2, \quad a_t = R\ddot{\theta},$
are called the radial and tangential components of acceleration, respectively.
The notation for angular velocity and angular acceleration is often defined as
$\omega = \dot{\theta}, \quad \alpha = \ddot{\theta},$
so the radial and tangential acceleration components for circular trajectories are also written as
$a_r = - R\omega^2, \quad a_t = R\alpha.$
Point trajectories in a body moving in the plane
The movement of components of a mechanical system is analyzed by attaching a reference frame to each part and determining how the reference frames move relative to each other. If the structural strength of the parts is sufficient, then their deformation can be neglected and rigid transformations can be used to define this relative movement. This brings geometry into the study of mechanical movement.
Geometry is the study of the properties of figures that remain the same while the space is transformed in various ways; more technically, it is the study of invariants under a set of transformations.[11] Perhaps best known is high school Euclidean geometry, where planar triangles are studied under congruent transformations, also called isometries or rigid transformations. These transformations displace the triangle in the plane without changing the angle at each vertex or the distances between vertices. Kinematics is often described as applied geometry, where the movement of a mechanical system is described using the rigid transformations of Euclidean geometry.
The coordinates of points in the plane are two dimensional vectors in R2, so rigid transformations are those that preserve the distance measured between any two points. The Euclidean distance formula is simply the Pythagorean theorem. The set of rigid transformations in an n-dimensional space is called the special Euclidean group on Rn, and denoted SE(n).
Displacements and motion
The movement of each of the components of the Boulton & Watt Steam Engine (1784) is modeled by a continuous set of rigid displacements.
The position of one component of a mechanical system relative to another is defined by introducing a reference frame, say M, on one that moves relative to a fixed frame, F, on the other. The rigid transformation, or displacement, of M relative to F defines the relative position of the two components. A displacement consists of the combination of a rotation and a translation.
The set of all displacements of M relative to F is called the configuration space of M. A smooth curve from one position to another in this configuration space is a continuous set of displacements, called the motion of M relative to F. The motion of a body consists of a continuous set of rotations and translations.
Matrix representation
The combination of a rotation and translation in the plane R2 can be represented by a certain type of 3x3 matrix known as a homogeneous transform. The 3x3 homogeneous transform is constructed from a 2x2 rotation matrix A(φ) and the 2x1 translation vector d=(dx, dy), as
$[T(\phi, \mathbf{d})] = \begin{bmatrix} A(\phi) & \mathbf{d} \\ 0, 0 & 1\end{bmatrix} = \begin{bmatrix} \cos\phi & -\sin\phi & d_x \\ \sin\phi & \cos\phi & d_y \\ 0 & 0 & 1\end{bmatrix}.$
These homogeneous transforms perform rigid transformations on the points in the plane z=1, that is on points with coordinates p=(x, y, 1).
In particular, let p define the coordinates of points in a reference frame M coincident with a fixed frame F. Then, when the origin of M is displaced by the translation vector d relative to the origin of F and rotated by the angle φ relative to the x-axis of F, the new coordinates in F of points in M are given by
$\textbf{P} = [T(\phi, \mathbf{d})]\textbf{p} = \begin{bmatrix} \cos\phi & -\sin\phi & d_x \\ \sin\phi & \cos\phi & d_y \\ 0 & 0 & 1\end{bmatrix}\begin{Bmatrix}x\\y\\1\end{Bmatrix}.$
Homogeneous transforms represent affine transformations. This formulation is necessary because a translation is not a linear transformation of R2. However, using projective geometry, so that R2 is considered to be a subset of R3, translations become affine linear transformations.[12]
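A brief numerical sketch of such a planar homogeneous transform in Python/numpy (illustrative values: a 90° rotation followed by a translation by d = (2, 3)):

```
import numpy as np

def homogeneous_transform(phi, d):
    # 3x3 planar homogeneous transform built from the 2x2 rotation A(phi)
    # and the 2x1 translation d, as in the text.
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, d[0]],
                     [s,  c, d[1]],
                     [0.0, 0.0, 1.0]])

T = homogeneous_transform(np.pi / 2, (2.0, 3.0))
p = np.array([1.0, 0.0, 1.0])      # point (1, 0) of M in homogeneous coordinates
print(np.round(T @ p, 6))          # [2. 4. 1.] -> the point lands at (2, 4) in F
```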
Pure translation
If a rigid body moves so that its reference frame M does not rotate relative to the fixed frame F, the motion is said to be pure translation. In this case, the trajectory of every point in the body is an offset of the trajectory d(t) of the origin of M, that is,
$\textbf{P}(t)=[T(0,\textbf{d}(t))]\textbf{p} = \textbf{d}(t) + \textbf{p}.$
Thus, for bodies in pure translation the velocity and acceleration of every point P in the body are given by
$\textbf{V}_P=\dot{\textbf{P}}(t) = \dot{\textbf{d}}(t)=\textbf{V}_O,\quad \textbf{A}_P=\ddot{\textbf{P}}(t) = \ddot{\textbf{d}}(t) = \textbf{A}_O,$
where the dot denotes the derivative with respect to time and VO and AO are the velocity and acceleration, respectively, of the origin of the moving frame M. Recall the coordinate vector p in M is constant, so its derivative is zero.
Rotation of a body around a fixed axis
Main article: Circular motion
Figure 1: The angular velocity vector Ω points up for counterclockwise rotation and down for clockwise rotation, as specified by the right-hand rule. Angular position θ(t) changes with time at a rate ω(t) = dθ/dt.
Rotational or angular kinematics is the description of the rotation of an object.[13] The description of rotation requires some method for describing orientation. Common descriptions include Euler angles and the kinematics of turns induced by algebraic products.
In what follows, attention is restricted to simple rotation about an axis of fixed orientation. The z-axis has been chosen for convenience.
Position: This allows the description of a rotation as the angular position of a planar reference frame M relative to a fixed F about this shared z-axis. Coordinates p=(x, y) in M are related to coordinates P=(X, Y) in F by the matrix equation:
$\mathbf{P}(t) = [A(t)]\mathbf{p},$
where
$[A(t)] = \begin{bmatrix} \cos\theta(t) & -\sin\theta(t) \\ \sin\theta(t) & \cos\theta(t) \end{bmatrix},$
is the rotation matrix that defines the angular position of M relative to F.
Velocity: If the point p does not move in M, then its velocity in F is given by
$\mathbf{V}_P = \dot{\mathbf{P}} = [\dot{A}(t)]\mathbf{p}.$
It is convenient to eliminate the coordinates p and write this as an operation on the trajectory P(t),
$\mathbf{V}_P = [\dot{A}(t)][A(t)^{-1}]\mathbf{P} = [\Omega]\mathbf{P},$
where the matrix
$[\Omega] = \begin{bmatrix} 0 & -\omega \\ \omega & 0 \end{bmatrix},$
is known as the angular velocity matrix of M relative to F. The parameter ω is the time derivative of the angle θ, that is
$\omega = \frac{d\theta}{dt}.$
Acceleration: The acceleration of P(t) in F is obtained as the time derivative of the velocity,
$\mathbf{A}_P = \ddot{P}(t) = [\dot{\Omega}]\mathbf{P} + [\Omega]\dot{\mathbf{P}},$
which becomes
$\mathbf{A}_P = [\dot{\Omega}]\mathbf{P} + [\Omega][\Omega]\mathbf{P},$
where
$[\dot{\Omega}] = \begin{bmatrix} 0 & -\alpha \\ \alpha & 0 \end{bmatrix},$
is the angular acceleration matrix of M on F, and
$\alpha = \frac{d^2\theta}{dt^2}.$
Description of rotation then involves these three quantities:
• Angular position: The oriented distance from a selected origin on the rotational axis to a point of an object is a vector r(t) locating the point. The vector r(t) has some projection (or, equivalently, some component) r⊥(t) on a plane perpendicular to the axis of rotation. Then the angular position of that point is the angle θ from a reference axis (typically the positive x-axis) to the vector r⊥(t) in a known rotation sense (typically given by the right-hand rule).
• Angular velocity: The angular velocity ω is the rate at which the angular position θ changes with respect to time t:
$\omega = \frac {d\theta}{dt}$
The angular velocity is represented in Figure 1 by a vector Ω pointing along the axis of rotation with magnitude ω and sense determined by the direction of rotation as given by the right-hand rule.
• Angular acceleration: The magnitude of the angular acceleration α is the rate at which the angular velocity ω changes with respect to time t:
$\alpha = \frac {d\omega}{dt}$
The equations of translational kinematics can easily be extended to planar rotational kinematics for constant angular acceleration with simple variable exchanges:
$\omega_{\mathrm{f}} = \omega_{\mathrm{i}} + \alpha t\!$
$\theta_{\mathrm{f}} - \theta_{\mathrm{i}} = \omega_{\mathrm{i}} t + \tfrac{1}{2} \alpha t^2$
$\theta_{\mathrm{f}} - \theta_{\mathrm{i}} = \tfrac{1}{2} (\omega_{\mathrm{f}} + \omega_{\mathrm{i}})t$
$\omega_{\mathrm{f}}^2 = \omega_{\mathrm{i}}^2 + 2 \alpha (\theta_{\mathrm{f}} - \theta_{\mathrm{i}}).$
Here θi and θf are, respectively, the initial and final angular positions, ωi and ωf are, respectively, the initial and final angular velocities, and α is the constant angular acceleration. Although position in space and velocity in space are both true vectors (in terms of their properties under rotation), as is angular velocity, angle itself is not a true vector.
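A quick numerical check of these four relations in Python (made-up values: ω_i = 2 rad/s, α = 0.5 rad/s², t = 4 s):

```
w_i, alpha, t = 2.0, 0.5, 4.0

w_f = w_i + alpha * t                            # 4.0 rad/s
dtheta = w_i * t + 0.5 * alpha * t**2            # 12.0 rad
dtheta_avg = 0.5 * (w_f + w_i) * t               # 12.0 rad (average-velocity form)
print(w_f, dtheta, dtheta_avg, w_f**2 == w_i**2 + 2 * alpha * dtheta)  # ... True
```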
Point trajectories in a body moving in three dimensions
Important formulas in kinematics define the velocity and acceleration of points in a moving body as they trace trajectories in three dimensional space. This is particularly important for the center of mass of a body, which is used to derive equations of motion using either Newton's second law or Lagrange's equations.
Position
In order to define these formulas, the movement of a component B of a mechanical system is defined by the set of rotations [A(t)] and translations d(t) assembled into the homogeneous transformation [T(t)]=[A(t), d(t)]. Let p be the coordinates of a point P in B measured in the moving reference frame M; then the trajectory of this point traced in F is given by
$\textbf{P}(t)=[T(t)]\textbf{p} = \begin{Bmatrix} \textbf{P} \\ 1\end{Bmatrix}=\begin{bmatrix} A(t) & \textbf{d}(t) \\ 0 & 1\end{bmatrix} \begin{Bmatrix} \textbf{p} \\ 1\end{Bmatrix}.$
This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully clear in context.
This equation for the trajectory of P can be inverted to compute the coordinate vector p in M as,
$\textbf{p} = [T(t)]^{-1}\textbf{P}(t) = \begin{Bmatrix} \textbf{p} \\ 1\end{Bmatrix}=\begin{bmatrix} A(t)^T & -A(t)^T\textbf{d}(t) \\ 0 & 1\end{bmatrix} \begin{Bmatrix} \textbf{P}(t) \\ 1\end{Bmatrix}.$
This expression uses the fact that the transpose of a rotation matrix is also its inverse, that is
$[A(t)]^T[A(t)]=I.\!$
Velocity
The velocity of the point P along its trajectory P(t) is obtained as the time derivative of this position vector,
$\textbf{V}_P = [\dot{T}(t)]\textbf{p} = \begin{Bmatrix} \textbf{V}_P \\ 0\end{Bmatrix} = \begin{bmatrix} \dot{A}(t) & \dot{\textbf{d}}(t) \\ 0 & 0 \end{bmatrix} \begin{Bmatrix} \textbf{p} \\ 1\end{Bmatrix}.$
The dot denotes the derivative with respect to time, and because p is constant its derivative is zero.
This formula can be modified to obtain the velocity of P by operating on its trajectory P(t) measured in the fixed frame F. Substitute the inverse transform for p into the velocity equation to obtain
$\textbf{V}_P = [\dot{T}(t)][T(t)]^{-1}\textbf{P}(t) = \begin{Bmatrix} \textbf{V}_P \\ 0\end{Bmatrix} = \begin{bmatrix} \dot{A}A^T & -\dot{A}A^T\textbf{d} + \dot{\textbf{d}} \\ 0 & 0 \end{bmatrix} \begin{Bmatrix} \textbf{P}(t) \\ 1\end{Bmatrix}=[S]\textbf{P}.$
The matrix [S] is given by
$[S] = \begin{bmatrix} \Omega & -\Omega\textbf{d} + \dot{\textbf{d}} \\ 0 & 0 \end{bmatrix}$
where
$[\Omega] = \dot{A}A^T,$
is the angular velocity matrix.
Multiplying by the operator [S], the formula for the velocity VP takes the form
$\textbf{V}_P = [\Omega](\textbf{P}-\textbf{d}) + \dot{\textbf{d}} = \omega\times \textbf{R}_{P/O} + \textbf{V}_O,$
where the vector ω is the angular velocity vector obtained from the components of the matrix [Ω], the vector
$\textbf{R}_{P/O}=\textbf{P}-\textbf{d},$
is the position of P relative to the origin O of the moving frame M, and
$\textbf{V}_O=\dot{\textbf{d}},$
is the velocity of the origin O.
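A short Python/numpy sketch of this velocity formula, for a body spinning at 2 rad/s about the z-axis while its frame origin translates at 1 m/s along x (all numbers illustrative):

```
import numpy as np

omega = np.array([0.0, 0.0, 2.0])   # angular velocity vector of M relative to F
V_O   = np.array([1.0, 0.0, 0.0])   # velocity of the origin O of the moving frame
d     = np.array([2.0, 4.0, 0.0])   # current position of O in the fixed frame
P     = np.array([3.0, 4.0, 0.0])   # point of the body, in fixed-frame coordinates

R_PO = P - d                              # position of P relative to O
V_P = np.cross(omega, R_PO) + V_O         # V_P = omega x R_{P/O} + V_O
print(V_P)                                # [1. 2. 0.]
```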
Acceleration
The acceleration of a point P in a moving body B is obtained as the time derivative of its velocity vector,
$\textbf{A}_P = \frac{d}{dt}\textbf{V}_P = \frac{d}{dt}\big([S]\textbf{P}\big)=[\dot{S}]\textbf{P} + [S]\dot{\textbf{P}} = [\dot{S}]\textbf{P} + [S][S]\textbf{P} .$
This equation can be expanded by first computing
$[\dot{S}] = \begin{bmatrix} \dot{\Omega} & -\dot{\Omega}\textbf{d} -\Omega\dot{\textbf{d}} + \ddot{\textbf{d}} \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} \dot{\Omega} & -\dot{\Omega}\textbf{d} -\Omega\textbf{V}_O + \textbf{A}_O \\ 0 & 0 \end{bmatrix}$
and
$[S]^2 = \begin{bmatrix} \Omega & -\Omega\textbf{d} + \textbf{V}_O \\ 0 & 0 \end{bmatrix}^2 = \begin{bmatrix} \Omega^2 & -\Omega^2\textbf{d} + \Omega\textbf{V}_O \\ 0 & 0 \end{bmatrix}.$
The formula for the acceleration AP can now be obtained as
$\textbf{A}_P = \dot{\Omega}(\textbf{P} - \textbf{d}) + \textbf{A}_O + \Omega^2(\textbf{P}-\textbf{d}),$
or
$\textbf{A}_P = \alpha\times\textbf{R}_{P/O} + \omega\times\omega\times\textbf{R}_{P/O} + \textbf{A}_O,$
where α is the angular acceleration vector obtained from the derivative of the angular velocity matrix,
$\textbf{R}_{P/O}=\textbf{P}-\textbf{d},$
is the relative position vector, and
$\textbf{A}_O = \ddot{\textbf{d}}$
is the acceleration of the origin of the moving frame M.
Kinematic constraints
Kinematic constraints are constraints on the movement of components of a mechanical system. Kinematic constraints can be considered to have two basic forms: (i) constraints that arise from hinges, sliders and cam joints that define the construction of the system, called holonomic constraints, and (ii) constraints imposed on the velocity of the system, such as the knife-edge constraint of ice-skates on a flat plane, or rolling without slipping of a disc or sphere in contact with a plane, which are called non-holonomic constraints. Constraints can also arise from other interactions; more generally, a kinematic constraint is any condition relating properties of a dynamic system that must hold true at all times. Below are some common examples:
Kinematic coupling
A kinematic coupling exactly constrains all 6 degrees of freedom.
Rolling without slipping
An object that rolls against a surface without slipping obeys the condition that the velocity of its center of mass is equal to the cross product of its angular velocity with a vector from the point of contact to the center of mass,
$\boldsymbol{ v}_G(t) = \boldsymbol{\Omega} \times \boldsymbol{ r}_{G/O}.$
For the case of an object that does not tip or turn, this reduces to v = R ω.
Inextensible cord
This is the case where bodies are connected by an idealized cord that remains in tension and cannot change length. The constraint is that the sum of lengths of all segments of the cord is the total length, and accordingly the time derivative of this sum is zero.[14][15][16] A dynamic problem of this type is the pendulum. Another example is a drum turned by the pull of gravity upon a falling weight attached to the rim by the inextensible cord.[17] An equilibrium problem (not kinematic) of this type is the catenary.[18]
Kinematic pairs
Main article: kinematic pair
Reuleaux called the ideal connections between components that form a machine kinematic pairs. He distinguished between higher pairs, which were said to have line contact between the two links, and lower pairs, which have area contact between the links. J. Phillips[19] shows that there are many ways to construct pairs that do not fit this simple classification.
Lower pair: A lower pair is an ideal joint, or holonomic constraint, that maintains contact between a point, line or plane in a moving solid (three-dimensional) body and a corresponding point, line or plane in the fixed solid body. We have the following cases:
• A revolute pair, or hinged joint, requires a line, or axis, in the moving body to remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body to maintain contact with a similar perpendicular plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom, which is pure rotation about the axis of the hinge.
• A prismatic joint, or slider, requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body, and that a plane parallel to this line in the moving body maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom. This degree of freedom is the distance of the slide along the line.
• A cylindrical joint requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two degrees of freedom. The position of the moving body is defined by both the rotation about and slide along the axis.
• A spherical joint, or ball joint, requires that a point in the moving body maintain contact with a point in the fixed body. This joint has three degrees of freedom.
• A planar joint requires that a plane in the moving body maintain contact with a plane in fixed body. This joint has three degrees of freedom.
Higher pairs: Generally, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body. For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contacts between the involute curves that form the meshing teeth of two gears are cam joints.
Kinematic chains
Illustration of a four-bar linkage from Kinematics of Machinery, 1876
Rigid bodies, or links, connected by kinematic pairs, or joints, are called kinematic chains. Mechanisms and robots are examples of kinematic chains. The degree of freedom of a kinematic chain is computed from the number of links and the number and type of joints using the mobility formula. This formula can also be used to enumerate the topologies of kinematic chains that have a given degree of freedom, which is known as type synthesis in machine design.
Examples of kinematic chains: The planar one degree-of-freedom linkages assembled from N links and j hinged or sliding joints are:
• N=2, j=1: this is a two-bar linkage known as the lever;
• N=4, j=4: this is the four-bar linkage;
• N=6, j=7: this is a six-bar linkage. A six-bar linkage must have two links that support three joints, called ternary links. There are two distinct topologies that depend on how the two ternary linkages are connected. In the Watt topology, the two ternary links have a common joint. In the Stephenson topology the two ternary links do not have a common joint and are connected by binary links;[20]
• N=8, j=10: the eight-bar linkage has 16 different topologies;
• N=10, j=13: the 10-bar linkage has 230 different topologies,
• N=12, j=16: the 12-bar has 6856 topologies.
See Sunkari and Schmidt[21] for the number of 14- and 16-bar topologies, as well as the number of linkage topologies that have two, three and four degrees-of-freedom.
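For planar chains built from hinged or sliding (one-degree-of-freedom) joints, the mobility formula takes the familiar Chebychev-Grübler-Kutzbach form M = 3(N − 1) − 2j; a short Python check reproduces the one-degree-of-freedom counts listed above:

```
def planar_mobility(N, j):
    # Chebychev-Grubler-Kutzbach count for N links joined by j
    # one-degree-of-freedom (hinged or sliding) joints in the plane.
    return 3 * (N - 1) - 2 * j

for N, j in [(2, 1), (4, 4), (6, 7), (8, 10), (10, 13), (12, 16)]:
    print(N, j, planar_mobility(N, j))   # each of these prints mobility 1
```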
References
1. Edmund Taylor Whittaker (1904). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. Cambridge University Press. Chapter 1. ISBN 0-521-35883-3.
2. Joseph Stiles Beggs (1983). Kinematics. Taylor & Francis. p. 1. ISBN 0-89116-355-7.
3.
Ampère, André-Marie. Essai sur la Philosophie des Sciences. Chez Bachelier.
5.
6. O. Bottema & B. Roth (1990). Theoretical Kinematics. Dover Publications. preface, p. 5. ISBN 0-486-66346-9.
7. See, for example: Russell C. Hibbeler (2009). "Kinematics and kinetics of a particle". Engineering Mechanics: Dynamics (12th ed.). Prentice Hall. p. 298. ISBN 0-13-607791-9. , Ahmed A. Shabana (2003). "Reference kinematics". Dynamics of Multibody Systems (2nd ed.). Cambridge University Press. ISBN 978-0-521-54411-5. , P. P. Teodorescu (2007). "Kinematics". Mechanical Systems, Classical Models: Particle Mechanics. Springer. p. 287. ISBN 1-4020-5441-6.
A. Biewener (2003). Animal Locomotion. Oxford University Press. ISBN 0-19-850022-X.
9. Reuleaux, F.; Kennedy, Alex B. W. (1876), The Kinematics of Machinery: Outlines of a Theory of Machines, London: Macmillan
10. Geometry:the study of properties of given elements that remain invariant under specified transformations.
11. Paul, Richard (1981). Robot manipulators: mathematics, programming, and control : the computer control of robot manipulators. MIT Press, Cambridge, MA. ISBN 978-0-262-16082-7.
R. Douglas Gregory (2006). Classical Mechanics. Chapter 16. Cambridge: Cambridge University Press. ISBN 0-521-82678-0.
13. William Thomson Kelvin & Peter Guthrie Tait (1894). Elements of Natural Philosophy. Cambridge University Press. p. 4. ISBN 1-57392-984-0.
14.
15. M. Fogiel (1980). "Problem 17-11". The Mechanics Problem Solver. Research & Education Assoc. p. 613. ISBN 0-87891-519-2.
16. Irving Porter Church (1908). Mechanics of Engineering. Wiley. p. 111. ISBN 1-110-36527-6.
17. Morris Kline (1990). Mathematical Thought from Ancient to Modern Times. Oxford University Press. p. 472. ISBN 0-19-506136-5.
18. Phillips, Jack (2007). Freedom in Machinery, Volumes 1-2 (reprint ed.). Cambridge University Press. ISBN 978-0-521-67331-0.
Tsai, Lung-Wen (2001). Mechanism design: enumeration of kinematic structures according to function (illustrated ed.). CRC Press. p. 121. ISBN 978-0-8493-0901-4.
20. R. P. Sunkari and L. C. Schmidt, "Structural synthesis of planar kinematic chains by adapting a Mckay-type algorithm," Mechanism and Machine Theory 41 (2006) 1021–1030
Further reading
• Moon, Francis C. (2007). The Machines of Leonardo Da Vinci and Franz Reuleaux, Kinematics of Machines from the Renaissance to the 20th Century. Springer. ISBN 978-1-4020-5598-0.
http://math.stackexchange.com/questions/125877/generating-random-array-in-maple
# Generating random array in Maple
I'm trying to do simulation in Maple, but I can't figure out how to do the following:
How does one generate a set of random whole numbers in an array of 24 elements (in 1 column) where the sum of the numbers has to be 10 and each number must be between 0 and 10?
What distribution governs the randomness of these numbers? Could it be the multinomial? If so, you should sample from the multinomial distribution. – Sasha Mar 29 '12 at 12:26
## 2 Answers
If we produce all the partitions of 10 (of which there are 42) then we can pick `p` from that collection randomly (uniformly, if you want). There will be `np` elements in the chosen partition. We can choose `np` of the naturals 1..24, and use those as the positions of the Vector `V` to which to assign the entries of `p`.
The following acts inplace on Vector `V`. Initialization to precompute the set of all partitions of 10, and to construct V and the random generating function (1..42), takes almost no time at all.
Subsequent generation of a solution, populating V, takes about 0.00125 sec on an Intel i7.
````
restart:
randomize(): # different results for each Maple session
interface(rtablesize=24):
G:=proc(v::Vector,y,all) local i,p,np,pos;
ArrayTools:-Fill(0,v);
p:=all[y()];
np:=nops(p);
pos:=[combinat[randcomb](24,np)[]];
for i from 1 to np do v[pos[i]]:=p[i]; end do;
NULL; # acts in-place on V
end proc:
st:=time():
All:=combinat[partition](10):
Y:=rand(1..nops(All)):
V:=Vector[row](24):
time()-st;
0.
G(V,Y,All);
V;
[0, 0, 1, 0, 1, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 2]
G(V,Y,All);
V;
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 4, 0, 0, 4, 0, 0, 0, 0, 0]
G(V,Y,All);
V;
[0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4]
st:=time():
for j from 1 to 10^3 do G(V,Y,All); end do:
time()-st; # time for 1000 regenerations
0.125
````
The above picks from the complete set of partitions of 10 (ie. "ways to split up 10 as a sum of positive integers") with each having an equal weight, hence a uniform discrete distribution. That might not be what you want. Another way to generate each `p` is to randomly select values from {1..10} while computing a running total, stopping whenever the running total is exactly 10, and rejecting/reselecting each chosen value if it pushes the running total over 10.
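For comparison, the multinomial sampling suggested in the comments gives a 24-element vector that automatically sums to 10 with entries in 0..10, though its distribution differs from the uniform-over-partitions scheme above; sketched here in Python/numpy rather than Maple:

```
import numpy as np

rng = np.random.default_rng()

# 10 "balls" dropped independently and uniformly into 24 "boxes": the counts
# always sum to 10 and each entry lies between 0 and 10.
v = rng.multinomial(10, [1.0 / 24] * 24)
print(v, v.sum())
```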
Here is a solution for $10$ elements:
````
with(RandomTools):
s:=0:
A10:=Vector(1..10):
while not(s = 10) do
for n from 1 to 10 do
a := Generate(integer(range =0..10)):
A10[n]:=a;
s:=s+a:
if s >10 then
n:=0;
s:=0;
A10:=Vector(1..10);
end if;
end do;
end do;
A10;
````
For some reason I couldn't generate a Vector with 24 elements.
http://amathew.wordpress.com/2012/10/20/a-spectrum-acyclic-with-respect-to-itself/
# Climbing Mount Bourbaki
Thoughts on mathematics
October 20, 2012
## A spectrum acyclic with respect to itself
Posted by Akhil Mathew under topology | Tags: Brown representability, Brown-Comenetz duality |
[2] Comments
Let ${X}$ be a nonzero connective spectrum with finitely generated homotopy groups. Then the lowest homotopy group in ${\pi_*(X \wedge X)}$ is the tensor square of the lowest homotopy group in ${\pi_*(X)}$: in particular, $X \wedge X$ is never zero (i.e., contractible). The purpose of this post is to describe an example of a nontrivial spectrum ${I}$ with ${I \wedge I \simeq 0}$. I learned this example from Hovey and Strickland’s “Morava ${K}$-theories and localization.”
1. A non-example
To start with, here’s a spectrum which does not work: ${H \mathbb{Q}/\mathbb{Z}}$. This is a natural choice because
$\displaystyle \mathbb{Q}/\mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Q}/\mathbb{Z} = 0.$
On the other hand, from the cofiber sequence
$\displaystyle H \mathbb{Z} \rightarrow H \mathbb{Q} \rightarrow H \mathbb{Q}/\mathbb{Z} \rightarrow \Sigma H \mathbb{Z},$
we obtain a cofiber sequence
$\displaystyle H \mathbb{Z} \wedge H \mathbb{Q}/\mathbb{Z} \rightarrow 0 \rightarrow (H \mathbb{Q}/\mathbb{Z})^{\wedge 2} \rightarrow \Sigma H \mathbb{Z} \wedge H \mathbb{Q}/\mathbb{Z}$
which shows in particular that
$\displaystyle (H \mathbb{Q}/\mathbb{Z})^{\wedge 2} \simeq \Sigma H \mathbb{Z} \wedge H \mathbb{Q}/\mathbb{Z};$
in particular, its ${\pi_1}$ is isomorphic to ${\mathbb{Q}/\mathbb{Z}}$, not zero.
2. Brown-Comenetz duality
A useful way of producing interesting spectra is the Brown representability theorem. In order to produce a spectrum, it suffices to give a cohomology (or homology, by Adams’s variant of the theorem) theory ${H^*}$ on pointed spaces, or on spectra. An example of a homology theory
$\displaystyle \mathbf{Sp} \rightarrow \mathbf{Ab}$
is given by stable homotopy ${\pi_0}$. This functor has the defining property of a homology theory: that is, given a cofiber sequence of spectra
$\displaystyle X' \rightarrow X \rightarrow X'' \rightarrow \Sigma X,$
one has an exact sequence
$\displaystyle \pi_0 X ' \rightarrow \pi_0 X \rightarrow \pi_0 X''.$
Given an injective abelian group, we can dualize this. Take for example ${\mathbb{Q}/\mathbb{Z}}$; we find that for any cofiber sequence as above, there is an exact sequence
$\displaystyle \hom( \pi_0 X'', \mathbb{Q}/\mathbb{Z}) \rightarrow \hom(\pi_0 X, \mathbb{Q}/\mathbb{Z}) \rightarrow \hom(\pi_0 X',\mathbb{Q}/\mathbb{Z})$
because the functor ${\hom(\cdot, \mathbb{Q}/\mathbb{Z})}$ is exact. In particular, the functor ${\mathbf{Sp} \rightarrow \mathbf{Ab}}$ given by ${\hom( \pi_0(\cdot), \mathbb{Q}/\mathbb{Z})}$ (that is, the dual to stable homotopy) defines a cohomology theory. Applying the Brown representability theorem yields:
Theorem 1 There is a spectrum ${I}$ with the property that, for any spectrum ${X}$, we have a natural isomorphism
$\displaystyle [X, I] \simeq \hom(\pi_0 X, \mathbb{Q}/\mathbb{Z}).$
The spectrum ${I}$ is called the Brown-Comenetz dual of the sphere. One can generalize this construction to dualizing any homology theory to a cohomology theory. That is, given a spectrum ${E}$, it defines a homology theory
$\displaystyle X \mapsto E_0(X) \stackrel{\mathrm{def}}{=}\pi_0( E \wedge X)$
and thus a dual cohomology theory
$\displaystyle X \mapsto \hom(\pi_0(E \wedge X), \mathbb{Q}/\mathbb{Z})$
which we can conclude is representable by a spectrum ${cE}$, called the Brown-Comenetz dual of ${E}$. This is nothing really new: in fact,
$\displaystyle cE = \mathrm{Fun}(E, I)$
as an unwinding of the ${\wedge, \mathrm{Fun}}$ adjunction (the analog in stable homotopy theory of the tensor-hom adjunction) shows. This definition gives a natural map
$\displaystyle E \rightarrow cc E$
which is an equivalence when ${E}$ has finite homotopy groups, by ordinary Pontryagin duality for finite abelian groups.
3. ${I \wedge I}$
The claim is that ${I}$ is an example of a spectrum such that ${I \wedge I = 0}$. Note that
$\displaystyle \pi_0 I = [S, I] = \hom( \pi_0 S, \mathbb{Q}/\mathbb{Z}) = \mathbb{Q}/\mathbb{Z},$
while ${\pi_i I = 0}$ for ${i > 0}$. In particular, ${I}$ is nonconnective; its negative homotopy groups are dual to the stable homotopy groups of spheres.
The goal of this blog post is to describe the proof of:
Theorem 2 ${I \wedge I = 0}$.
In order to see this, we will show first that
$\displaystyle I \wedge H \mathbb{F}_p = 0 . \ \ \ \ \ (1)$
In fact, the spectrum ${I \wedge H \mathbb{F}_p}$ clearly has ${p}$-torsion homotopy groups (namely, the mod ${p}$ homology groups of ${I}$). It therefore suffices to show that ${I \wedge H \mathbb{F}_p \wedge S/p = 0}$. Here we consider the Moore spectrum ${S/p}$: this is the cofiber of the multiplication by ${p}$ map ${S \stackrel{p}{\rightarrow} S}$, and is consequently a torsion spectrum.
But ${S/p}$ is self-dual under Spanier-Whitehead duality, up to suspension, so that ${ I \wedge S/p }$ is a suspension of ${\mathrm{Fun}(S/p, I) = c S/p}$. We are reduced to showing that
$\displaystyle \pi_* ( H \mathbb{F}_p \wedge c (S/p)) = 0.$
But
$\displaystyle \hom(\pi_* ( H \mathbb{F}_p \wedge c (S/p)), \mathbb{Q}/\mathbb{Z}) = [H \mathbb{F}_p, cc S/p]_{-*} = [ H \mathbb{F}_p, S/p]_{-*} = 0,$
because it is a theorem that there are no nontrivial maps from ${H \mathbb{F}_p}$ to a finite spectrum. (A nice proof can be found in Ravenel’s “Localization with respect to periodic homology theories” paper.) Since ${\pi_* ( H \mathbb{F}_p \wedge c (S/p))}$ is a torsion group, it must vanish if its dual does. This proves (1).
The rest of the proof is now “formal.” The class of spectra which are ${I}$-acyclic (i.e., which smash to zero with ${I}$) is closed under homotopy colimits and (de)suspensions. It follows that if it contains each ${H \mathbb{F}_p}$, it contains each ${H \mathbb{F}_{p^n}}$, and each ${HG}$ for ${G}$ a finitely generated torsion group. It follows that the category contains ${HG}$ for ${G}$ an arbitrary torsion group. Now we have
$\displaystyle I = \varinjlim_{n} \tau_{\geq -n} I,$
where each ${\tau_{\geq -n} I}$ has only torsion homotopy groups and is concentrated in ${[-n, 0]}$: it is thus an iterated extension of spectra of the form ${HG}$, ${G}$ torsion. We find
$\displaystyle I \wedge I = \varinjlim_n (\tau_{\geq -n} I ) \wedge I = \varinjlim_n 0 = 0.$
As another example, the $p$-local version of $I$ provides an example of a spectrum whose Bousfield class is strictly smaller than that of $H \mathbb{F}_p$.
### 2 Responses to “A spectrum acyclic with respect to itself”
1. Matthew Emerton Says:
January 3, 2013 at 1:00 am
Dear Akhil,
In the final displayed equation of section 1, is there a wedge square missing on the left-hand side?
Regards,
Matt
1. Akhil Mathew Says:
January 3, 2013 at 9:17 am
Thanks for the correction!
http://math.stackexchange.com/questions/162992/laplace-transform
# Laplace transform
I want to find the Laplace transform of the following signal but I don't know what to do with the absolute value.
$$x(t)=e^{-|t|}\; u(t+1)$$
The first thing that came to my mind was to split it into the negative and positive parts, find each one, and add them. The problem is that I checked the solutions and it is not the same.
Any ideas?
-
Isn't $|-t|=|t|$ for all $t$? – Babak S. Jun 25 '12 at 20:02
@BabakSorouh I fixed it. Thanks. – jeanleauveux22 Jun 25 '12 at 20:04
What LT are you considering? Two sided? – Peter Tamaroff Jun 25 '12 at 20:15
@PeterTamaroff Yes two sided. – jeanleauveux22 Jun 25 '12 at 20:17
## 2 Answers
Since you're considering the two sided Laplace Transform you need to evaluate
$$\mathcal L(s)=\int_{-\infty}^\infty e^{-|t|} \theta(t+1) e^{- st}dt$$
Since $\theta(t+1)=0$ for $t<-1$ your integral becomes
$$\mathcal L(s)=\int_{-1}^\infty e^{-|t|} e^{- st}dt$$
Now we consider that for $(-1,0)$, $-|t|=t$, and for $(0,\infty)$, $-|t|=-t$, so that
$$\mathcal L(s)=\int_{-1}^0 e^{t} e^{- st}dt+\int_{0}^\infty e^{-t} e^{- st}dt$$
Can you take it from there?
-
Yes I understand. What's different with the right sided LT? – jeanleauveux22 Jun 25 '12 at 20:31
@jeanleauveux22 The "right" sided is usually called "one sided" and is $\mathcal L(s) =\int_0^\infty \theta(t)f(t) e^{-st}dt$ where $\theta$ is the Heaviside step function. – Peter Tamaroff Jun 26 '12 at 0:15
$$e^{-|t|} = e^{t}u(-t)+e^{-t}u(t)$$ $$x(t)=e^{-|t|} u(t+1)=e^{t}[u(t+1)-u(t)]+e^{-t}u(t)$$ Two Side Laplace Transform : $$X(S)= \int_{-\infty}^{\infty} x(t)e^{-st}dt$$ $$X(S)=\int_{-1}^0 e^t e^{-st}dt + \int_0^{\infty} e^{-t} e^{-st}dt$$ $$X(S)=\int_{-1}^0 e^{-(s-1)t}dt + \int_0^{\infty} e^{-(s+1)t}dt$$ $$X(S)= \frac{-1} {s-1}e^{-(s-1)t} |_{-1}^0 + \frac{-1} {s+1}e^{-(s+1)t}|_0^{\infty}$$ $$X(S)=\frac{-1+e^{s-1}} {s-1} +\frac{1} {s+1}$$
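As an aside (not part of either answer), the split can be checked symbolically; a small SymPy sketch, evaluated at the arbitrary point $s=2$ inside the region of convergence:

```
import sympy as sp

t = sp.symbols('t', real=True)

def X(s):
    # Two-sided Laplace transform of e^{-|t|} u(t+1), split at t = 0 as above
    return (sp.integrate(sp.exp(t) * sp.exp(-s*t), (t, -1, 0))
            + sp.integrate(sp.exp(-t) * sp.exp(-s*t), (t, 0, sp.oo)))

s0 = sp.Integer(2)                      # any s with Re(s) > -1 makes both pieces converge
closed_form = (-1 + sp.exp(s0 - 1)) / (s0 - 1) + 1 / (s0 + 1)
print(sp.simplify(X(s0) - closed_form))  # prints 0
```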
-
http://mathoverflow.net/questions/116739?sort=oldest
## Generating spatially-aware degree-preserving random graphs?
In the study of biological neural networks, researchers sometimes compare hypotheses vs. a degree-preserving random null model. One major criticism against this approach is that connections in neural networks are greatly affected by their location in 3D Euclidean space (whereas the null model isn't).
Question: Is there a random graph model that both
• preserves vertex degrees, and
• accounts for the 3D location of vertices in space (i.e., nearby vertices are more likely to be connected than distant vertices)?
I'd be interested in both directed or undirected graphs (usually they can be adapted to suit, anyway).
-
## 1 Answer
I can make you a directed model! Let's specify the out-degree of each vertex: $o_x$ is the out-degree of vertex $x$.
Choose yourself an increasing function $F(d)$ that encodes a distance penalty. For each pair of vertices $x$ and $y$, let $A_{xy}=F(\|x-y\|)U_{xy}$, where the $U_{xy}$ are independent random variables (uniform on $(0,1)$, say). The edges leaving vertex $x$ are then the $\vec{xy}$ corresponding to the $o_x$ smallest values of $A_{xy}$.
Doing the undirected case seems a bit more subtle...
EDIT: Let me expand/change this a bit. For a mathematically interesting model, where you might be able to prove something, you could look at a Gibbs probability distribution. You define an "energy" for each legal configuration (i.e. subgraph of $G$ satisfying the degree constraints). Then, based on the hypothesis that high energy states are unlikely, you assign them low probability.
More specifically, a reasonable approach would be to define the energy of a configuration $\xi$ to be $\Phi(\xi)=\sum_{e\in E(\xi)}F(\|e\|)$. If you let $\Lambda$ be the set of all legal configurations, then the Gibbs measure is defined by $\mathbb P(\xi)=e^{-\Phi(\xi)}/\sum_{\zeta\in\Lambda}e^{-\Phi(\zeta)}$. (The normalization, which is sometimes called the partition function, $Z(\Lambda)$, there makes this a probability measure). The reason that Gibbs measures are nice is that the multiplicative properties of the exponential function ($e^{a+b}=e^ae^b$) lead to some independence properties of the measure you've constructed. For example it's easy to see that if $\xi$ and $\xi'$ are 2 configurations that agree except that $\xi$ contains edges $ab$ and $cd$, while $\xi'$ contains edges $ac$ and $bd$, there's an easily calculated relationship between $\mathbb P(\xi)$ and $\mathbb P(\xi')$.
-
Hmmm, thanks for that. I see what you're doing there, but this wouldn't preserve vertex in-degrees, which would be a disadvantage. (I'll have to think about whether this would be a significant disadvantage for the application I have in mind.) – Douglas S. Stones Dec 19 at 2:27
I guess you could do what I'm doing (putting weights on edges) and then minimize over configurations that satisfy the degree constraints. My guess is the difficulty with this kind of thing is that you might get mathematically intractable models (although it's probably reasonably easy for computers to simulate it). – Anthony Quas Dec 19 at 2:39
http://psychology.wikia.com/wiki/Random_permutation
# Random permutation
A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. The use of random permutations is often fundamental to fields that use randomized algorithms. Such fields include coding theory, cryptography, and simulation. A good example of a random permutation is the shuffling of a deck of cards: this is ideally a random permutation of the 52 cards.
One method of generating a random permutation of a set of length n uniformly at random (i.e. each of the n! permutations is equally likely to appear) is to generate a sequence by taking a random number between 1 and n sequentially, ensuring that there is no repetition, and interpreting this sequence {x1, ..., xn} as the permutation
$\begin{pmatrix} 1 & 2 & 3 & \cdots & n \\ x_1 & x_2 & x_3 & \cdots & x_n \\ \end{pmatrix}.$
The above brute-force method will require occasional retries whenever the random number picked is a repeat of a number already selected. A simple algorithm to generate a permutation of n items uniformly at random without retries, known as the Knuth shuffle, is to start with the identity permutation, and then go through the positions 1 through n, and for each position i swap the element currently there with a uniformly randomly chosen element from positions i through n, inclusive. It's easy to verify that any permutation of n elements will be produced by this algorithm with probability exactly 1/n!, thus yielding a uniform distribution over all such permutations.
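For concreteness, the shuffle just described can be written in a few lines of Python (essentially what library routines such as `random.shuffle` do):

```
import random

def knuth_shuffle(items):
    # For each position i, swap it with a position chosen uniformly from i..n-1.
    n = len(items)
    for i in range(n - 1):
        j = random.randint(i, n - 1)   # randint is inclusive on both ends
        items[i], items[j] = items[j], items[i]
    return items

print(knuth_shuffle(list(range(1, 11))))   # one of the 10! orderings, each with probability 1/10!
```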
For an account of the probability distribution of the number of fixed points of a uniformly distributed random permutation, see rencontres numbers. That distribution approaches a Poisson distribution with expected value 1 as n grows. In particular, it is an elegant application of the inclusion-exclusion principle to show that the probability that there are no fixed points approaches 1/e. The first n moments of this distribution are exactly those of the Poisson distribution.
See Ewens's sampling formula for a connection with population genetics.
http://stats.stackexchange.com/questions/36050/how-to-determine-which-variables-are-dependent-or-independent
# How to determine which variables are dependent or independent?
I'm new to this, and am struggling with the concept of how to determine what is the independent variable and what is the dependent variable.
Here's an example that I think would go a long way for me: I'm studying baseball, specifically the home run rate for all players over the course of the season. Is the player the independent variable, and the home run rate dependent on the player?
-
Look at it this way: how could you formulate the relationship in the other direction? Do you have in mind, say, a mathematical formula that will give you a player's name when you plug in their home run rate? And if you somehow succeeded in doing that, what would it tell you (that just looking up the data in a table doesn't)? – whuber♦ Sep 10 '12 at 19:42
## 2 Answers
There are two sides to this question: the mathematical aspect, and the causal (inferential) aspect.
In the mathematical sense, a dependent variable y is a function f of an independent variable x: $y = f(x), f \subset X \times Y$. In this sense, the set X is the set of all players, and the set Y is the set of home run rates. The function f connects the chosen player and his (or her) home run rate. However, when $f$ is invertible this relation can be inverted by use of the inverse function $f^{-1}$, in the sense of $y=f(x) \Leftrightarrow x = f^{-1}(y)$.
However, I understand your question as pertaining to the inferential structure. In terms of your question, what depends on what? Clearly, the home run rate depends on the player, because it is a function of some characteristics of the player - his speed, striking prowess, etc. In that sense, the data itself is imbued by a causal structure: if the player changed - say, went into heavy training or used enhancing drugs - he or she could improve performance.
I hope this goes somewhere in answering your question.
-
Think of this in terms of a regression function $y=a_1 X_1+a_2 X_2+\dots+a_k X_k$. The $X_i$s are the independent variables used to predict the value of $y$. You can think of the result for $y$ as depending on the values of the $X$s. That is why $y$ is called the dependent variable. The $X_i$s are called the independent variables because in collecting the data we choose their values independently and look to see what we get for $y$. So in your example the player home run percentage is the dependent variable, and the independent variables would be characteristics that the player has which would help determine whether the percentage is high, low or somewhere in the middle.
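To make this concrete, here is a tiny illustrative least-squares fit in Python; the feature names and all numbers are invented for the example:

```
import numpy as np

# Rows are players; columns are independent variables (X1 = bat speed, X2 = fly-ball rate).
X = np.array([[70.0, 0.30],
              [75.0, 0.35],
              [68.0, 0.25],
              [80.0, 0.40]])
y = np.array([0.03, 0.05, 0.02, 0.07])        # dependent variable: home run rate

A = np.column_stack([np.ones(len(y)), X])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares estimates of the a_i
print(coef)                                   # y is modelled as a function of the chosen X's
```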
-
http://math.stackexchange.com/questions/209721/how-can-i-reconstruct-a-polynomial-with-access-to-its-roots-and-not-to-some-poin
# How can i reconstruct a polynomial with access to its roots and not to some points evaluation of it?
Is there any other element that can let me interpolate a polynomial, other than the well known Lagrange interpolation by which a polynomial of degree $d$ can be reconstructed from $d+1$ pairs $x_i,f(x_i)$? If I have another kind of information which is related to the polynomial, i.e. some roots, can I reconstruct it?
-
Aren't roots also points of evaluation: $(x_i,0)$? – draks ... Oct 9 '12 at 8:22
Right. But is there something special about the evaluation at 0 that makes it sufficient to reconstruct it with fewer than $d+1$ points? – curious Oct 9 '12 at 8:23
## 2 Answers
If you know all the roots of a polynomial, then you can almost reconstruct it. If $a_1,\dots,a_n$ are the roots, then the polynomials with exactly those roots are of the form $c(x-a_1)(x-a_2)\cdots (x-a_n)$ for some constant $c$ (assuming we really know all roots, i.e. that we are not necessarily limiting ourselves to the real roots or something like that).
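A small numerical illustration of this answer (assuming all roots are known, with multiplicity, plus a single extra evaluation to fix the constant $c$):

```
import numpy as np

roots = [1.0, 2.0, 3.0]            # all roots, with multiplicity
x0, fx0 = 0.0, -12.0               # one additional evaluation pins down c

monic = np.poly(roots)             # coefficients of (x-1)(x-2)(x-3)
c = fx0 / np.polyval(monic, x0)    # c = f(x0) / prod(x0 - a_i)
print(c * monic)                   # [  2. -12.  22. -12.], i.e. 2x^3 - 12x^2 + 22x - 12
```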
-
To be precise, this answer requires that you know all the roots and their multiplicities, and that this includes roots that lie in an extension of the ground field (so for a real polynomial, you need to know the complex roots, and their multiplicities, as well). – Marc van Leeuwen Oct 9 '12 at 9:23
The polynomial is uniquely characterized by its coefficients, of which a degree-$n$ polynomial has $n+1$. Identifying the polynomial on which a set of sampling points lies thus corresponds to estimating its coefficients, which can be done by solving a linear system of equations (polynomial regression). For the system to have a unique solution it needs rank $n+1$ in the case of a degree-$n$ polynomial (the matrix need not be square: an overdetermined system simply has redundant rows, and assuming no noise, more than $n+1$ points only adds redundancy). For this reason, in the general case you need at least $n+1$ distinct points to be able to find the solution.
Peter
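As a sketch of this linear-system view (not part of the original answer): with $n+1$ or more distinct sample points the coefficients solve a Vandermonde system, e.g. in Python:

```
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0])          # 4 samples of an (unknown) degree-2 polynomial
ys = 3*xs**2 - 2*xs + 1

V = np.vander(xs, N=3)                        # columns x^2, x, 1
coef, *_ = np.linalg.lstsq(V, ys, rcond=None) # solve the (overdetermined) linear system
print(coef)                                   # [ 3. -2.  1.]
```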
-
http://math.stackexchange.com/questions/139594/calculus-optimisation/140037
# Calculus optimisation
This is meant to be a relatively easy problem but I cannot get my head around it. It is from Burkill's "First course in Analysis", book $4$(f), $10$.
An open bowl is in the form of a segment of a sphere of metal of negligible thickness. Find the shape of the bowl if its volume is the greatest for a given area of metal. (Solution: Hemisphere)
Could anyone help me with the solution of the problem?
Here is one of my attempts. I assumed that the problem is circumferentially symmetric so I considered the planar problem instead. I took the area of the segment of a disk with radius R, central angle $\theta$, and area A which I calculated as follows:
$$A = \text{sector area} - \text{area of triangle} = \frac{R^2\pi}{2\pi} \theta - 2 \frac{1}{2} R \cos\left(\frac{\theta}{2}\right) R\sin\left(\frac{\theta}{2}\right) = \frac{R^2\theta}{2}-\frac{R^2}{2}\sin\theta.$$
This is constrained by the area that is the length of material we have say $L$:
$$L = R\theta.$$
Substituting in for $R$:
$$A = \frac{L^2}{2\theta} - \frac{L^2}{2\theta^2} \sin\theta.$$
Differentiate to find turning value:
$$\frac{dA}{d\theta} = \frac{L^2}{2}\left(-\frac{1}{\theta^2} + \frac{2}{\theta^3} \sin\theta - \frac{1}{\theta^2} \cos\theta\right) = 0.$$
I am a bit stuck now on how to get $\theta$ out of this and I am questioning whether my method is really correct. Could anyone help me out? Thank you!
-
The sine and cosine functions are denoted `\sin` and `\cos` in $\TeX$, respectively. – Brett Frankel May 1 '12 at 20:30
Thanks for the edit! – adamG May 1 '12 at 21:23
## 2 Answers
I think the problem implies that the bowl is obtained by cutting a sphere with a plane, so that the bowl is indeed rotationally symmetric. Let $2 \theta$ be the aperture of the bowl, with $0 < \theta < \pi$. Then the area of the metal is $A(R,\theta) = 4 \pi R^2 \sin^2\left(\frac{\theta}{2}\right)$.
The volume of liquid such a bowl could hold is given by an integral, obtained by shell method: $$\begin{eqnarray} V_\theta &=& \int_{R \cos \theta}^R A(\rho, \arccos\left( \frac{R \cos\theta}{\rho} \right)) \mathrm{d} \rho = \int_{R \cos \theta}^R 4 \pi \rho^2 \sin^2\left(\frac{1}{2} \arccos\left( \frac{R \cos\theta}{\rho} \right)\right) \mathrm{d} \rho \\ &=& \int_{R \cos\theta}^R 2 \pi \rho \left( \rho - R \cos(\theta) \right) \mathrm{d} \rho = \frac{4}{3} \pi R^3 \left( 3 - 2 \sin^2\left(\frac{\theta}{2}\right) \right) \sin^4\left(\frac{\theta}{2}\right) = \frac{4}{3} \pi R^3 \left( 3 - 2 \frac{A}{4 \pi R^2} \right) \frac{A^2}{16 \pi^2 R^4} \end{eqnarray}$$ The above expression, viewed as a function of $R$ with $A$ fixed, equals $\frac{A^2}{4 \pi R} - \frac{A^3}{24 \pi^2 R^3}$, which is maximal when $A = 2 \pi R^2$, meaning $\theta = \frac{\pi}{2}$, i.e. exactly the hemisphere.
-
Thank you Sasha! It is interesting how many ways one can interpret this problem! – adamG May 2 '12 at 21:34
It is useful to know the (curved) surface area and the volume of a spherical cap. These are obtainable by straightforward integration, or more nicely by arguments that go back to Archimedes.
If the radius of the sphere the cap was cut from is $r$, and the maximum depth of the cap is $h$, then the spherical cap has curved surface area $2\pi rh$ and volume $\frac{\pi h^2}{3}(3r-h)$.
Let $A=2\pi k$ be the surface area. We want to maximize $h^2(3r-h)$ given that $2\pi rh =A=2\pi k$. So $r=\frac{k}{h}$.
We therefore want to maximize $h^2\left(\frac{3k}{h}-h\right)$, which is $3kh-h^3$. This is a very easy calculus problem. The maximum is reached when $h=\sqrt{k}$. The corresponding $r$ is $\frac{k}{\sqrt{k}}=\sqrt{k}$.
So the maximum is reached when $r=h$, that is, when the bowl is a hemisphere.
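A quick numerical sanity check of this argument (my own addition), using the quoted cap formulas with the area fixed at $2\pi k$:

```
import numpy as np

k = 1.0
h = np.linspace(0.05, 3.0, 1000)
r = k / h                              # area constraint 2*pi*r*h = 2*pi*k  =>  r = k/h
valid = h <= 2 * r                     # the depth of a cap cannot exceed the diameter
vol = np.pi * h**2 * (3*r - h) / 3     # cap volume formula from the answer
print(h[valid][np.argmax(vol[valid])]) # approximately 1.0 = sqrt(k), i.e. r = h (hemisphere)
```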
-
Thank you Andre! Simple solution! You interpret the shape as not a spherical segment but rather a spherical cap despite the wording of the problem, which is slightly confusing I must admit. Nevertheless, this is the closest to my approach to the problem. – adamG May 2 '12 at 21:52
http://mathoverflow.net/questions/96404/references-surveys-concerning-characteristic-classes-of-flat-vector-bundles
## References/surveys concerning characteristic classes of flat vector bundles
I'm looking for good surveys about characteristic classes of flat real vector bundles. Letting $G$ be $\text{SL}_n(\mathbb{R})$ with the discrete topology, orientable flat $n$-dimensional real vector bundles are classified by $BG$, so the characteristic classes I'm looking for are elements of the cohomology of $BG$ with respect to various systems of coefficients.
The one source I know of is Morita's book "The geometry of characteristic classes", but this is not very comprehensive or up-to-date. I also have found many references about Chern-Simons invariants, but they all seem to be written from the perspective of algebraic geometry or mathematical physics or differential geometry. I'd really like a source that is as topological as possible; for instance, for ordinary characteristic classes I prefer the approach taken in Milnor-Stashef to the approach via Chern-Weil theory. But any sources at all are welcome.
-
## 2 Answers
I am not aware of much recent activity (after 2001, when Morita wrote his book) in this field, so I think it is still very valuable. Chern-Simons theory means something rather different today. The useful information on characteristic classes of flat vector bundles (CCFVB) seems to be scattered throughout the literature; one of the reasons might be that there is no complete picture known. Googling gives you a lot of sources, but I completely agree that a modern detailed and comprehensive exposition is missed.
Also, as far as I know, Chern-Weil theory is an indispensable ingredient of the theory. Purely topological methods are not very sensitive to flat bundles, while Chern-Weil theory allows you to import methods from Lie algebra cohomology etc.
Here is a list of texts from which I learnt something on the subject:
Karoubi: Homologie cyclique et K-Theorie algebrique.
Jones, Westbury: Homology spheres, eta-invariants and algebraic K-theory. The first few sections form an informative survey which is worth reading even if you are willing to ignore the main problem they consider.
Dupont: Curvature and characteristic classes (the last section)
Kamber, Tondeur: Foliated bundles and characteristic classes
A modern approach to CCFVB is to use differential cohomology (or differential characters). If you google, you find more information on that.
-
I think the names of Johan Louis Dupont ( http://home.imf.au.dk/dupont/ ), Chih-Han Sah (1934--1997) (SUNY Stony Brook) are appropriate. Dupont's homepage contains related articles e.g.
Homology of O(n) and O^1(1,n) made discrete; an application of edgewise subdivision by J. Dupont, M. Bökstedt and Morten Brun, J. Pure Appl. Algebra, 123 (1998), 131-152.
and many of his papers are in arXiv, e.g.
Regulators and characteristic classes of flat bundles Johan Dupont, Richard Hain, Steven Zucker http://arxiv.org/abs/alg-geom/9202023
The Lie groups made discrete arise in algebraic K-theory; for small n the groups K_n are related to "scissors congruence groups" (as far as I understand). Let me quote http://reh.math.uni-duesseldorf.de/~topologie/scissors/
"The basic reference for the school is the monograph [D] J.L. Dupont, Scissors congruences, group homology and characteristic classes, World Scientific.
Further references occuring below are
[M] Milnor, On the homology of Lie groups made discrete. Commentarii Mathematici Helvetici, Vol. 58, No. 1, 72--85, 1983
[S] Suslin, A.A., Algebraic K-theory of fields. Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Berkeley, Calif., 1986), 222--244, Amer. Math. Soc., Providence, RI, 1987"
And further quote:
"Scissors congruences
Two polytopes in euclidean n-space are called scissors congruent if they can be subdivided into finitely many pieces such that each piece in the first polytope is congruent to exactly one piece in the second polytope.
Elementary geometric considerations show that polytopes in the plane are scissors congruent if and only if they have the same area. Hilbert's 3rd problem was the question whether volume determines the scissors congruence class also in 3-space. The answer was given by Max Dehn almost immediately: In 1900, he described an invariant with values in R ⊗Z R/Z which shows that the answer is no. Only 1965 J. P. Sydler proved that volume and Dehn invariant together determine the scissors congruence class in 3-space. Higher dimensional analogues are still unsolved. There are variants for spherical and hyperbolic geometry, which are open even in dimension 3.
From a modern point of view, these classical questions are closely related to the computation of the homology of Lie groups considered as discrete groups. Furthermore there are interesting connections to deep questions about the algebraic K-theory of the complex numbers."
See also:
http://en.wikipedia.org/wiki/Hilbert%27s_third_problem
P. Cartier, Décomposition des polyèdres : le point sur le troisième problème de Hilbert, Séminaire Bourbaki, 1984-85, n° 646, p. 261—288.
Hilbert’s 3rd problem and Invariants of 3-manifolds by Walter Neumann, G&T Monographs, 1998.
http://www.emis.de/journals/GT/GTMon1/paper19.abs.html
-
http://mathoverflow.net/questions/59252?sort=oldest
geometric realization on $\mathbf{sTop}$
Is geometric realization $|\cdot|:\mathbf{Top}^{\mathbf{\Delta}^{\textrm{op}}}\rightarrow \mathbf{Top}$ a left Quillen functor? If so, under what model structure on $\mathbf{Top}^{\mathbf{\Delta}^{\textrm{op}}}$? I would guess the Reedy model structure.
A reference would be ideal.
Thanks
-
Shouldn't it be "left" instead of "right"? – Tom Goodwillie Mar 23 2011 at 0:05
yeah left. oops, thanks! – Alan Wilder Mar 23 2011 at 5:51
1 Answer
The results on homotopy invariance of the geometric realization of simplicial spaces go back at least to May's The Geometry of Iterated Loop Spaces, although he doesn't explicitly mention model categories. It is indeed true that the geometric realization of simplicial spaces is a left Quillen functor with respect to the Reedy model structure. One reference is Proposition VII.3.6 of Simplicial Homotopy Theory by Goerss and Jardine. There is also a survey of related results in nLab.
-
http://math.stackexchange.com/questions/243125/proving-that-the-matrix-is-not-invertible
# Proving that the matrix is not invertible.
A is a 2x3 matrix and B is a 3x2 matrix. How can I prove that the matrix D = AB is not invertible? I could not go further in this problem. The only thing that I have found is that the product of these two matrices will be a 2x2 matrix, but how can I show that it is not invertible?
-
Actually no. If $A$ and $B$ have your given dimensions, then $$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}$$ – Patrick Da Silva Nov 23 '12 at 11:52
But I should prove that it is not. Let's take them as A being 365x54 and B being 54x365. – Yigit Can Nov 23 '12 at 11:55
@AD.: Yes, that's precisely what I mean. – Patrick Da Silva Nov 23 '12 at 11:56
@YigitCan So, can you specify exactly what the problem is? – AD. Nov 23 '12 at 11:57
@Yigit : In your new example you switched the conditions ; if $A$ has size $m \times n$ and $B$ has size $n \times m$, the statement is true for all matrices only when $n > m$, i.e. when the transformation $T_B$ in my answer cannot be surjective. – Patrick Da Silva Nov 23 '12 at 11:57
## 3 Answers
The statement would be true if you considered $D = BA$.
You can see that the matrix $A$ gives rise to a transformation $T_A : \mathbb R^3 \to \mathbb R^2$. Similarly, the matrix $B$ gives rise to $T_B : \mathbb R^2 \to \mathbb R^3$ and $T_D = T_B \circ T_A : \mathbb R^3 \to \mathbb R^3$. The problem with $T_D$ is that $$\mathrm{Im}(T_D) \subseteq \mathrm{Im}(T_B)$$ and $T_B$ cannot be surjective because the image of a basis in $\mathbb R^2$ can span at most a subspace of $\mathbb R^3$ of dimension $2$, not $3$.
Hope that helps,
-
In general, if $A$ is an $m\times n$ matrix and $B$ is an $n\times m$ matrix with $n < m$, then $AB$ cannot be invertible.
Results used:
A matrix $A$ is invertible iff $Ax = 0$ has only the trivial solution.
If $A$ is an $m\times n$ matrix with $m < n$, then $Ax=0$ has a nontrivial solution.
Since $B$ has fewer rows than columns, there is a nontrivial $x_0$ such that $Bx_0=0$, hence $AB(x_0) = 0$, and as a result $AB$ cannot be invertible.
-
[assuming $n<m$] Recall that $\text{rank}( A ),\text{rank}(B)\leqslant\min\{n,m\}$, and $\text{rank}(AB)\leqslant\min\{\text{rank}A,\text{rank}B\}$. The product is an $m\times m$ matrix with rank $\leqslant n < m$, and therefore cannot be invertible.
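A quick numerical illustration of the rank argument (a sketch, not from the answers):

```
import numpy as np

m, n = 5, 3                                  # n < m, as in the comments above
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))              # m x n
B = rng.standard_normal((n, m))              # n x m
D = A @ B                                    # m x m, but rank(D) <= n < m
print(np.linalg.matrix_rank(D), np.linalg.det(D))   # 3 and a (numerically) zero determinant
```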
-
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G13/g13bjc.html
# NAG Library Function Document: nag_tsa_multi_inp_model_forecast (g13bjc)
## 1 Purpose
nag_tsa_multi_inp_model_forecast (g13bjc) produces forecasts of a time series (the output series) which may depend on one or more other (input) series via a previously estimated multi-input model. The future values of any input series must be supplied. Standard errors of the forecasts are produced. If future values of some of the input series have been obtained as forecasts using ARIMA models for those series, this may be allowed for in the calculation of the standard errors.
## 2 Specification
#include <nag.h>
#include <nagg13.h>
void nag_tsa_multi_inp_model_forecast (Nag_ArimaOrder *arimav, Integer nseries, Nag_TransfOrder *transfv, double para[], Integer npara, Integer nev, Integer nfv, const double xxy[], Integer tdxxy, double rmsxy[], const Integer mrx[], Integer tdmrx, const double parx[], Integer ldparx, Integer tdparx, double fva[], double fsd[], Nag_G13_Opt *options, NagError *fail)
## 3 Description
nag_tsa_multi_inp_model_forecast (g13bjc) has two stages. The first stage is essentially the same as a call to the model estimation function nag_tsa_multi_inp_model_estim (g13bec), with zero iterations. In particular, all the arguments remain unchanged in the supplied input series transfer function models and output noise series ARIMA model except that a further iteration takes place for any $\omega $ corresponding to a simple input. The internal nuisance arguments associated with the pre-observation period effects of the input series are estimated where requested, and so are any backforecasts of the output noise series. The output components ${z}_{t}$ and ${n}_{t}$, and residuals ${a}_{t}$ are calculated exactly as described in the Section 3 of nag_tsa_multi_inp_model_estim (g13bec).
In the second stage, the forecasts of the output series ${y}_{t}$ are calculated for $t=n+1,n+2,\dots ,n+L$ where $n$ is the latest time point of the observations and $L$ is the maximum lead time of the forecasts.
First the new values, ${x}_{t}$ for any input series are used to form the input components ${z}_{t}$ for $t=n+1,n+2,\dots ,n+L$ using the transfer function models:
(a) ${z}_{t}={\delta }_{1}{z}_{t-1}+{\delta }_{2}{z}_{t-2}+\cdots +{\delta }_{p}{z}_{t-p}+{\omega }_{0}{x}_{t-b}-{\omega }_{1}{x}_{t-b-1}-\cdots -{\omega }_{q}{x}_{t-b-q}$. The output noise component ${n}_{t}$ for $t=n+1,n+2,\dots ,n+L$ is then forecast by setting ${a}_{t}=0$ for $t=n+1,n+2,\dots ,n+L$ and using the ARIMA model equations:
(b) ${e}_{t}={\varphi }_{1}{e}_{t-1}+{\varphi }_{2}{e}_{t-2}+\cdots +{\varphi }_{p}{e}_{t-p}+{a}_{t}-{\theta }_{1}{a}_{t-1}-{\theta }_{2}{a}_{t-2}-\cdots -{\theta }_{q}{a}_{t-q}$.
(c) ${w}_{t}={\Phi }_{1}{w}_{t-s}+{\Phi }_{2}{w}_{t-2×s}+\cdots +{\Phi }_{P}{w}_{t-P×s}+{e}_{t}-{\Theta }_{1}{e}_{t-s}-{\Theta }_{2}{e}_{t-2×s}-\cdots -{\Theta }_{Q}{e}_{t-Q×s}$.
(d) ${n}_{t}={\left({\nabla }^{d}{\nabla }_{s}^{D}\right)}^{-1}\left({w}_{t}+c\right)$.
This last step of ‘integration’ reverses the process of differencing. Finally the output forecasts are calculated as
$y_t = z_{1,t} + z_{2,t} + \cdots + z_{m,t} + n_t .$
The forecast error variance of ${y}_{t+l}$ (i.e., at lead time $l$) is ${S}_{l}^{2}$, which is the sum of parts which arise from the various input series, and the output noise component. That part due to the output noise is
$sn_l^2 = V_n \times \left( \psi_0^2 + \psi_1^2 + \cdots + \psi_{l-1}^2 \right)$
${V}_{n}$ is the estimated residual variance of the output noise ARIMA model, and ${\psi }_{0},{\psi }_{1},\dots $, are the ‘psi-weights’ of this model as defined in Box and Jenkins (1976). They are calculated by applying the equations (b), (c) and (d) above for $t=0,1,\dots ,L$, but with artificial values for the various series and with the constant $c$ set to 0. Thus all values of ${a}_{t}$, ${e}_{t}$, ${w}_{t}$ and ${n}_{t}$ are taken as zero for $t<0$; ${a}_{t}$ is taken to be 1 for $t=0$ and 0 for $t>0$. The resulting values of ${n}_{t}$ for $t=0,1,\dots ,L$ are precisely ${\psi }_{0},{\psi }_{1},\dots ,{\psi }_{L}$ as required.
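As a rough illustration (mine, not the NAG implementation) of how the psi-weights and the noise part of the forecast standard error could be computed for a plain non-seasonal ARMA noise model with no differencing, with invented parameter values:

```
import numpy as np

def psi_weights(phi, theta, L):
    # Impulse response of an ARMA(p,q) model, as described above:
    # feed a_0 = 1 and a_t = 0 otherwise through the ARMA recursion
    # (no seasonality, no differencing, constant set to 0).
    psi = np.zeros(L + 1)
    for t in range(L + 1):
        ar = sum(phi[i] * psi[t - 1 - i] for i in range(len(phi)) if t - 1 - i >= 0)
        ma = -theta[t - 1] if 1 <= t <= len(theta) else 0.0
        psi[t] = ar + (1.0 if t == 0 else 0.0) + ma
    return psi

phi, theta, V_n = [0.5], [0.3], 1.0          # invented ARMA(1,1) values and residual variance
psi = psi_weights(phi, theta, L=7)
sn = np.sqrt(V_n * np.cumsum(psi**2))        # sn[l-1] is the noise-part s.e. at lead time l
print(np.round(sn, 3))
```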
Further contributions to ${S}_{l}^{2}$ come only from those input series, for which future values are forecasts which have been obtained by applying input series ARIMA models. For such a series the contribution is
$sz_l^2 = V_x \times \left( \nu_0^2 + \nu_1^2 + \cdots + \nu_{l-1}^2 \right)$
${V}_{x}$ is the estimated residual variance of the input series ARIMA model. The coefficients ${\nu }_{0},{\nu }_{1},\dots $ are calculated by applying the transfer function model equation (a) above for $t=0,1,\dots ,L$, but again with artificial values of the series. Thus all values of ${z}_{t}$ and ${x}_{t}$ for $t<0$ are taken to be zero, and ${x}_{0},{x}_{1},\dots $ are taken to be the psi-weight sequence ${\psi }_{0},{\psi }_{1},\dots $ for the input series ARIMA model. The resulting values of ${z}_{t}$ for $t=0,1,\dots ,L$ are precisely ${\nu }_{0},{\nu }_{1},\dots ,{\nu }_{L}$ as required.
In adding such contributions ${sz}_{l}^{2}$ to ${sn}_{l}^{2}$ to make up the total forecast error variance ${S}_{l}^{2}$, it is assumed that the various input series with which these contributions are associated, are statistically independent of each other.
When using the function in practice an ARIMA model is required for all the input series. In the case of those inputs for which no such ARIMA model is available (or its effects are to be excluded), the corresponding orders and arguments and the estimated residual variance should be set to zero.
## 4 References
Box G E P and Jenkins G M (1976) Time Series Analysis: Forecasting and Control (Revised Edition) Holden–Day
## 5 Arguments
1: arimav – Nag_ArimaOrder *
Pointer to structure of type Nag_ArimaOrder with the following members:
p – Integer Input
d – Integer Input
q – Integer Input
bigp – Integer Input
bigd – Integer Input
bigq – Integer Input
s – Integer Input
On entry: these seven members of arimav must specify the orders vector $\left(p,d,q,P,D,Q,s\right)$, respectively, of the ARIMA model for the output noise component.
$p$, $q$, $P$ and $Q$ refer, respectively, to the number of autoregressive $\left(\varphi \right)$, moving average $\left(\theta \right)$, seasonal autoregressive $\left(\Phi \right)$ and seasonal moving average $\left(\Theta \right)$ arguments.
$d$, $D$ and $s$ refer, respectively, to the order of non-seasonal differencing, the order of seasonal differencing and the seasonal period.
Constraints:
• $p$, $d$, $q$, $P$, $D$, $Q$, $s\ge 0$;
• $p+q+P+Q>0$;
• $s\ne 1$;
• if $s=0$, $P+D+Q=0$;
• if $s>1$, $P+D+Q>0$;
• $d+s×\left(P+D\right)\le n$;
• $p+d-q+s×\left(P+D-Q\right)\le n$.
2: nseries – Integer Input
On entry: the number of input and output series. There may be any number of input series (including none), but only one output series.
Constraint: ${\mathbf{nseries}}>1$ if there are no arguments in the model (that is $p=q=P=Q=0$ and ${\mathbf{options}}\mathbf{.}{\mathbf{cfixed}}=\mathrm{Nag_TRUE}$), ${\mathbf{nseries}}\ge 1$ otherwise.
3: transfv – Nag_TransfOrder *
Pointer to structure of type Nag_TransfOrder with the following members:
b – Integer * Input
q – Integer * Input
p – Integer * Input
r – Integer * Input
On entry: before use these member pointers must be allocated memory by calling nag_tsa_transf_orders (g13byc) which allocates ${\mathbf{nseries}}-1$ elements to each pointer. The memory allocated to these pointers must be given the transfer function model orders $b$, $q$ and $p$ of each of the input series. The order arguments for input series $i$ are held in the $i$th element of the allocated memory for each pointer. $\mathbf{transfv}\mathbf{\to }\mathbf{b}\left[i-1\right]$ holds the value ${b}_{i}$, $\mathbf{transfv}\mathbf{\to }\mathbf{q}\left[i-1\right]$ holds the value ${q}_{i}$ and $\mathbf{transfv}\mathbf{\to }\mathbf{p}\left[i-1\right]$ holds the value ${p}_{i}$.
For a simple input, ${b}_{i}={q}_{i}={p}_{i}=0$.
$\mathbf{transfv}\mathbf{\to }\mathbf{r}\left[i-1\right]$ holds the value ${r}_{i}$, where ${r}_{i}=1$ for a simple input, and ${r}_{i}=2$ or 3 for a transfer function input.
The choice ${r}_{i}=3$ leads to estimation of the pre-period input effects as nuisance arguments, and ${r}_{i}=2$ suppresses this estimation. This choice may affect the returned forecasts.
When ${r}_{i}=1$, any nonzero contents of the $i$th element of the memory of $\mathbf{transfv}\mathbf{\to }\mathbf{b}$, $\mathbf{transfv}\mathbf{\to }\mathbf{q}$ and $\mathbf{transfv}\mathbf{\to }\mathbf{p}$ are ignored.
Constraint: $\mathbf{transfv}\mathbf{\to }\mathbf{r}\left[\mathit{i}-1\right]=1$, $2$ or $3$, for $\mathit{i}=1,2,\dots ,{\mathbf{nseries}}-1$
The memory allocated to the members of transfv must be freed by a call to nag_tsa_trans_free (g13bzc)
4: para[npara] – double Input/Output
On entry: estimates of the multi-input model arguments. These are in order firstly the ARIMA model arguments: $p$ values of $\varphi $ arguments, $q$ values of $\theta $ arguments, $P$ values of $\Phi $ arguments, $Q$ values of $\Theta $ arguments. These are followed by the transfer function model argument values ${\omega }_{0},{\omega }_{1},\dots ,{\omega }_{{q}_{1}}$, and ${\delta }_{1},{\delta }_{2},\dots ,{\delta }_{{p}_{1}}$ for the first of any input series and similarly for each subsequent input series. The final component of para is the value of the constant $c$.
On exit: the input estimates are unaltered except that any $\omega $ estimates corresponding to a simple input will be updated by a single iteration.
5: npara – Integer Input
On entry: the exact number of $\varphi $, $\theta $, $\Phi $, $\Theta $, $\omega $, $\delta $, $c$ arguments, so that ${\mathbf{npara}}=p+q+P+Q+{\mathbf{nseries}}+\sum \left({p}_{i}+{q}_{i}\right)$, the summation being over all the input series. ($c$ must be included whether its value was previously estimated or was set fixed.)
6: nev – Integer Input
On entry: the number of original (undifferenced) values in each of the input and output time-series.
7: nfv – Integer Input
On entry: the number of forecast values of the output series required.
Constraint: ${\mathbf{nfv}}>0$.
8: xxy[$\left({\mathbf{nev}}+{\mathbf{nfv}}\right)×{\mathbf{tdxxy}}$] – const double Input
Note: the $\left(i,j\right)$th element of the matrix is stored in ${\mathbf{xxy}}\left[\left(i-1\right)×{\mathbf{tdxxy}}+j-1\right]$.
On entry: the columns of xxy must contain in the first nev places, the past values of each of the input and output series, in that order. In the next nfv places, the columns relating to the input series (i.e., columns 0 to ${\mathbf{nseries}}-2$) contain the future values of the input series which are necessary for construction of the forecasts of the output series $y$.
9: tdxxy – Integer Input
On entry: the stride separating matrix column elements in the array xxy.
Constraint: ${\mathbf{tdxxy}}\ge {\mathbf{nseries}}$.
10: rmsxy[nseries] – double Input/Output
On entry: elements of ${\mathbf{rmsxy}}\left[0\right]$ to ${\mathbf{rmsxy}}\left[{\mathbf{nseries}}-2\right]$ must contain the estimated residual variance of the input series ARIMA models. In the case of those inputs for which no ARIMA model is available or its effects are to be excluded in the calculation of forecast standard errors, the corresponding entry of rmsxy should be set to 0.
On exit: ${\mathbf{rmsxy}}\left[{\mathbf{nseries}}-1\right]$ contains the estimated residual variance of the output noise ARIMA model which is calculated from the supplied series. Otherwise rmsxy is unchanged.
11: mrx[$7×{\mathbf{tdmrx}}$] – const Integer Input
On entry: the orders array for each of the input series ARIMA models. Thus, column $i-1$ contains values of $p$, $d$, $q$, $P$, $D$, $Q$, $s$ for input series $i$. In the case of those inputs for which no ARIMA model is available, the corresponding orders should be set to 0.
If there are no input series then the null pointer (Integer *)0 may be supplied in place of mrx.
12: tdmrx – Integer Input
On entry: the stride separating matrix column elements in the array mrx.
Constraint: ${\mathbf{tdmrx}}\ge {\mathbf{nseries}}-1$.
13: parx[${\mathbf{ldparx}}×{\mathbf{tdparx}}$] – const double Input
Note: the $\left(i,j\right)$th element of the matrix is stored in ${\mathbf{parx}}\left[\left(i-1\right)×{\mathbf{tdparx}}+j-1\right]$.
On entry: values of the arguments ($\varphi $, $\theta $, $\Phi $, and $\Theta $) for each of the input series ARIMA models. Thus column $i$ contains ${\mathbf{mrx}}\left[\left(0\right)×{\mathbf{tdmrx}}+i\right]$ values of $\varphi $, ${\mathbf{mrx}}\left[\left(2\right)×{\mathbf{tdmrx}}+i\right]$ values of $\theta $, ${\mathbf{mrx}}\left[\left(3\right)×{\mathbf{tdmrx}}+i\right]$ values of $\Phi $ and ${\mathbf{mrx}}\left[\left(5\right)×{\mathbf{tdmrx}}+i\right]$ values of $\Theta $ – in that order.
Values in the columns relating to those input series for which no ARIMA model is available are ignored.
If there are no input series then the null pointer (double *)0 may be supplied in place of parx.
14: ldparx – Integer Input
On entry: the maximum number of arguments in any of the input series ARIMA models. If there are no input series then ldparx is not referenced.
Constraint: ${\mathbf{ldparx}}\ge nce=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,\left({\mathbf{mrx}}\left[\left(0\right)×{\mathbf{tdmrx}}+\mathit{i}\right]+{\mathbf{mrx}}\left[\left(2\right)×{\mathbf{tdmrx}}+\mathit{i}\right]+\text{}{\mathbf{mrx}}\left[\left(3\right)×{\mathbf{tdmrx}}+\mathit{i}\right]+{\mathbf{mrx}}\left[\left(5\right)×{\mathbf{tdmrx}}+\mathit{i}\right]\right)\right)$, for $\mathit{i}=0,1,\dots ,{\mathbf{nseries}}-1$.
15: tdparx – Integer Input
On entry: the stride separating matrix column elements in the array parx.
Constraint: ${\mathbf{tdparx}}\ge {\mathbf{nseries}}-1$.
16: fva[nfv] – double Output
On exit: the required forecast values for the output series.
17: fsd[nfv] – double Output
On exit: the standard errors for each of the forecast values.
18: options – Nag_G13_Opt * Input/Output
On entry/exit: a pointer to a structure of type Nag_G13_Opt whose members are optional arguments for nag_tsa_multi_inp_model_forecast (g13bjc). If the optional arguments are not required, then the null pointer, G13_DEFAULT, can be used in the function call to nag_tsa_multi_inp_model_forecast (g13bjc). Details of the optional arguments and their types are given below in Section 10.
19: fail – NagError * Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_2_INT_ARG_LT
On entry, ${\mathbf{ldparx}}=〈\mathit{\text{value}}〉$ while $nce=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{ldparx}}\ge nce$. (See the expression for $nce$ in Section 5 where ldparx is described).
On entry, ${\mathbf{tdmrx}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nseries}}-1=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdmrx}}\ge {\mathbf{nseries}}-1$.
On entry, ${\mathbf{tdparx}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nseries}}-1=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdparx}}\ge {\mathbf{nseries}}-1$.
On entry, ${\mathbf{tdxxy}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nseries}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdxxy}}\ge {\mathbf{nseries}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_ARIMA_TEST_FAILED
On entry, or during execution, one or more sets of the ARIMA ($\varphi $, $\theta $, $\Phi $ or $\Theta $) arguments do not satisfy the stationarity or invertibility test conditions.
NE_BAD_PARAM
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{cfixed}}$ had an illegal value.
NE_CONSTRAINT
General constraint: $〈\mathit{\text{value}}〉$.
NE_DELTA_TEST_FAILED
On entry, or during execution, one or more sets of $\delta $ arguments do not satisfy the stationarity or invertibility test conditions.
NE_G13_OPTIONS_NOT_INIT
On entry, the option structure, options, has not been initialized using nag_tsa_options_init (g13bxc).
NE_G13_ORDERS_NOT_INIT
On entry, the orders array structure transfv in function nag_tsa_transf_orders (g13byc) has not been initialized.
NE_INT_ARG_LE
On entry, ${\mathbf{nfv}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nfv}}>0$.
NE_INT_ARG_LT
On entry, ${\mathbf{nseries}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nseries}}\ge 1$.
NE_INT_ARRAY_2
Value $〈\mathit{\text{value}}〉$ given to $\mathbf{transfv}\mathbf{\to }\mathbf{r}\left[〈\mathit{\text{value}}〉\right]$ is not valid. Correct range for elements of $\mathbf{transfv}\mathbf{\to }\mathbf{r}$ is $1\le \mathbf{transfv}\mathbf{\to }\mathbf{r}\left[i\right]\le 3$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_INVALID_NSER
On entry, ${\mathbf{nseries}}=1$ and there are no arguments in the model, i.e., ($p=q=P=Q=0$ and ${\mathbf{options}}\mathbf{.}{\mathbf{cfixed}}=\mathrm{Nag_TRUE}$).
NE_MAT_NOT_POS_DEF
Attempt to invert the second derivative matrix needed in the calculation of the covariance matrix of the parameter estimates has failed. The matrix is not positive definite, possibly due to rounding errors.
NE_NPARA_MR_MT_INCONSIST
On entry, there is inconsistency between npara on the one hand and the elements in the orders structures, arimav and transfv on the other.
NE_NSER_INCONSIST
Value of nseries passed to nag_tsa_transf_orders (g13byc) was $〈\mathit{\text{value}}〉$ which is not equal to the value $〈\mathit{\text{value}}〉$ passed in this function.
NE_SOLUTION_FAIL_CONV
Iterative refinement has failed to improve the solution of the equations giving the latest estimates of the arguments. This occurred because the matrix of the set of equations is too ill-conditioned.
## 7 Accuracy
The computation used is believed to be stable.
## 8 Further Comments
The time taken by nag_tsa_multi_inp_model_forecast (g13bjc) is approximately proportional to the product of the length of each series and the square of the number of arguments in the multi-input model.
## 9 Example
The data in this example relate to 40 observations of an output time series and 5 input time series. This example differs from Section 9 in nag_tsa_multi_inp_model_estim (g13bec) in that there are now 4 simple input series. The output series has one autoregressive $\left(\varphi \right)$ argument and one seasonal moving average $\left(\Theta \right)$ argument. The seasonal period is 4. The transfer function input (the fifth in the set) is defined by orders ${b}_{5}=1$, ${q}_{5}=0$, ${p}_{5}=1$, ${r}_{5}=3$, so that it allows for pre-observation period effects. The initial values of the specified model are:
$\varphi = 0.495,\quad \Theta = 0.238,\quad \omega_1 = -0.367,\quad \omega_2 = -3.876,\quad \omega_3 = 4.516,\quad \omega_4 = 2.474,\quad \omega_5 = 8.629,\quad \delta_1 = 0.688,\quad c = -82.858.$
A further eight values of the input series are supplied, and it is assumed that the values for the fifth series have themselves been forecast from an ARIMA model with orders 2 0 2 0 1 1 4, in which ${\varphi }_{1}=1.6743$, ${\varphi }_{2}=-0.9505$, ${\theta }_{1}=1.4605$, ${\theta }_{2}=-0.4862$ and ${\Theta }_{1}=0.8993$, and for which the residual mean square is 0.1720.
The following are computed and printed out: the estimated residual variance for the output noise series, the eight forecast values and their standard errors, and the values of the components ${z}_{t}$ and the output noise component ${n}_{t}$.
### 9.1 Program Text
Program Text (g13bjce.c)
### 9.2 Program Data
Program Data (g13bjce.d)
### 9.3 Program Results
Program Results (g13bjce.r)
## 10 Optional Arguments
A number of optional input and output arguments to nag_tsa_multi_inp_model_forecast (g13bjc) are available through the structure argument options of type Nag_G13_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member and those arguments not selected will be assigned default values. If no use is to be made of any of the optional arguments you should use the null pointer, G13_DEFAULT, in place of options when calling nag_tsa_multi_inp_model_forecast (g13bjc); the default settings will then be used for all arguments.
Before assigning values to options the structure must be initialized by a call to the function nag_tsa_options_init (g13bxc). Values may then be assigned directly to the structure members in the normal C manner.
Options selected by direct assignment are checked within nag_tsa_multi_inp_model_forecast (g13bjc) for being within the required range; if a value is outside the range, an error message is generated.
When all calls to nag_tsa_multi_inp_model_forecast (g13bjc) have been completed and the results contained in the options structure are no longer required, nag_tsa_free (g13xzc) should be called to free the NAG allocated memory from options.
### 10.1 Optional Arguments Checklist and Default Values
For easy reference, the following list shows the input and output members of options which are valid for nag_tsa_multi_inp_model_forecast (g13bjc) together with their default values where relevant.
Boolean list Nag_TRUE
Boolean cfixed Nag_FALSE
double *zt
double *noise
### 10.2 Description of the Optional Arguments
List – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{List}}=\mathrm{Nag_TRUE}$ then the argument settings which are used in the call to nag_tsa_multi_inp_model_forecast (g13bjc) will be printed.
cfixed – Nag_Boolean Default $\text{}=\mathrm{Nag_FALSE}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{cfixed}}$ must be set to Nag_FALSE if the constant was estimated when the model was fitted, and Nag_TRUE if it was held at a fixed value. This only affects the degrees of freedom used in calculating the estimated residual variance.
zt – double * Default memory $\text{}=\left({\mathbf{nev}}+{\mathbf{nfv}}\right)×\left({\mathbf{nseries}}-1\right)$
On exit: this pointer is allocated memory internally with $\left({\mathbf{nev}}+{\mathbf{nfv}}\right)×\left({\mathbf{nseries}}-1\right)$ elements corresponding to $\left({\mathbf{nev}}+{\mathbf{nfv}}\right)$ rows by ${\mathbf{nseries}}-1$ columns. The columns of ${\mathbf{options}}\mathbf{.}{\mathbf{zt}}$ hold the values of the input component series ${z}_{t}$.
noise – double * Default memory $\text{}={\mathbf{nev}}+{\mathbf{nfv}}$
On exit: this pointer is allocated memory internally with ${\mathbf{nev}}+{\mathbf{nfv}}$ elements. It holds the output noise component ${n}_{t}$.
http://mathhelpforum.com/trigonometry/42436-help-hyperbolic-cotangent.html
# Thread:
1. ## Help with hyperbolic cotangent
I need some serious help solving for x in terms of known mathematical constants on this one:
$x\ln(2\pi)=\arctan\left(\coth\left(\frac{\pi x}{2}\right)\right)$
Good luck
2. Originally Posted by rman144
I need some serious help solving for x in terms of known mathematical constants on this one:
$x\ln(2\pi)=\arctan\left(\coth\left(\frac{\pi x}{2}\right)\right)$
Good luck
Take the tangent of both sides and then use identities; I do not think it will be that bad.
3. ## reply
I know about that, but I cannot find anywhere on the web that gives me an identity relating coth and tangent like that. Think you could help me out there?
4. Originally Posted by rman144
I know about that, but I cannot find anywhere on the web that gives me an identity relating coth and tangent like that. Think you could help me out there?
I would suggest converting it into its exponential form.
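Following that suggestion (a sketch of one possible route, not a full solution), writing coth in exponential form and then taking the tangent of both sides of the original equation gives
$\coth\left(\frac{\pi x}{2}\right)=\frac{e^{\pi x/2}+e^{-\pi x/2}}{e^{\pi x/2}-e^{-\pi x/2}}=\frac{e^{\pi x}+1}{e^{\pi x}-1}, \qquad\text{so}\qquad \tan\left(x\ln(2\pi)\right)=\frac{e^{\pi x}+1}{e^{\pi x}-1}.$
This is still transcendental in $x$, so a solution in terms of standard constants seems unlikely, and a numerical root-finder is probably the practical way to pin down $x$.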
http://mathoverflow.net/questions/19827/patching-parametric-integrals-to-degenerate-properly
## Patching parametric integrals to degenerate properly
While reading the question about fitting all answers to the question of getting a single formula which, in the limit, captures all the answers to $\int x^{a}dx$, I thought of an obvious generalization (of the question):
What are sufficient conditions on a family of functions $f(x,a)$ so that $\int f(x,a) dx$ can be expressed as a single formula $F(x,a)+C(a)$ valid (in the limit) for all a?
From the point of view of FToC, $C(a)$ is completely irrelevant, but nevertheless such an answer has a certain elegance since $C(a)$ really captures the boundary cases of $f(x,a)$.
[Edit: although I like Dylan's answer below, it is not very algorithmic -- can one do better?]
## 1 Answer
I don't know what is meant by a "single formula" but if, for instance, $f(x,a)$ is continuous, then $f(x,a) dx$ is a continuous 1-form, so has a well-defined path integral along any reasonable path. The integral will be path dependent if $f(x,a)$ depends at all on $a$, but let's take a path that moves first in the $a$ direction, then in the $x$ direction, i.e., set $F(x,a) = \int_{x_0}^x f(x,a)dx$ for some reasonable choice of basepoint $x_0$. Then if $f$ is differentiable in both variables, by Stokes' theorem $\frac{\partial F}{\partial a}(x,a) = \int_{x_0}^x \frac{\partial f}{\partial a}(x,a)dx$, which is still continuous.
Concretely, for the example $\int x^a dx$, take $x_0 = 1$. Then for $a \ne -1$, we have $\int_1^x x^a dx = \frac{x^{a+1}-1}{a+1}$, which approaches $\log x$ as $a \to -1$. (This is the compound interest limit.)
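To make the limiting value explicit (a short check of that last claim, with $\epsilon = a+1$):
$\lim_{a\to -1}\frac{x^{a+1}-1}{a+1}=\lim_{\epsilon\to 0}\frac{e^{\epsilon\ln x}-1}{\epsilon}=\lim_{\epsilon\to 0}\frac{\epsilon\ln x+O(\epsilon^2)}{\epsilon}=\ln x.$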
Indeed. So the answer is 1) choose a good base point, 2) 'indefinite integration' is a weird concept anyways, one should use definite integration instead. I have long believed in #2, now I have an even better reason for it. – Jacques Carette Mar 30 2010 at 15:45
http://mathforum.org/mathimages/index.php?title=Straight_Line_and_its_construction&diff=14427&oldid=14421
# Straight Line and its construction
### From Math Images
How to draw a straight line without a straight edge
The image shows the first planar linkage (a linkage is a series of rigid links connected with joints to form a closed chain, or a series of closed chains; each link has two or more joints, and the joints have various degrees of freedom to allow motion between the links) that drew a straight line without using a straight edge. Independently invented by a French army officer, Charles-Nicolas Peaucellier, and a Lithuanian (who some argue was actually Russian) mathematician, Lipmann Lipkin, it had important applications in engineering and mathematics.[2][3][4]
Field: Geometry
Created By: Cornell University Libraries and the Cornell College of Engineering
# Introduction
What is a straight line? How do you define straightness? The questions seem silly to ask because they are so intuitive. We come to accept that straightness is simply straightness and that its definition, like that of point and line, is simply assumed. However, why do we not assume the definition of a circle? When using a compass to draw a circle, we are not starting with a figure that we accept as circular; instead, we are using a fundamental property of circles, that the points on a circle are at a fixed distance from the center. This page explores the properties of straight lines and, more importantly and interestingly, the answer to the question "how do you construct something straight without a straight edge?"
# What Is A Straight Line?--- A Question Rarely Asked.
Today, we simply define a line as a one-dimensional object that extends to infinity in both directions and is straight, i.e. has no wiggles along its length. But what is straightness? It is a hard question because we can picture it, but we simply cannot articulate it. In his book Elements, Euclid defined a straight line as "lying evenly between its extreme points" and as having "breadthless length." This definition is pretty useless. What does he mean by "lying evenly"? It tells us nothing about how to describe or construct a straight line. So what is straightness anyway? There are a few good answers. For instance, in Cartesian coordinates (a Cartesian coordinate system specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length), the graph of $y=ax+b$ is a straight line, as shown in Image 1. In addition, the shortest distance between two points on a flat plane is a straight line, a definition we are most familiar with. However, it is important to realize that the definitions of being "shortest" and "straight" will change when you are no longer on a flat plane. For example, the shortest path between two points on a sphere runs along the "great circle" shown in Image 2, a section of the sphere that contains a diameter of the sphere, and a great circle is straight on the spherical surface. Since we are dealing with plane geometry here, we define a straight line as the curve $y=ax+b$ in Cartesian coordinates. For more properties of straight lines, you can refer to the book Experiencing Geometry by David W. Henderson.
Image 1 Image 2 [6]
# The Quest to Draw a Straight Line
## The Practical Need
Now having defined what a straight line is, we must figure out a way to construct it on a plane. However, the challenge is to do that without using anything that we assume to be straight, such as a straight edge (or ruler), just like how we construct a circle using a compass. Historically, it has been of great interest to mathematicians and engineers, not only because it is an interesting question to ponder, but also because it has important applications in engineering. Since the invention of various steam engines and machines that are powered by them, engineers have been trying to perfect mechanical linkages that convert all kinds of motions (especially circular motion) to linear motion.
Image 3[7]
Image 3 shows a patent drawing of an early steam engine. It is of the simplest form with a boiler (on the left), a cylinder with piston (above the boiler), a beam (on top) and a pump (on the right side) at the other end. The pump was usually used to extract water from the mines.
How this engine works: when the piston is at its lowest position, steam is let into the cylinder from valve K to push the piston upwards. Afterward, when the piston is at its highest position, cold water is let in from valve E, cooling the steam in the cylinder and causing the pressure in the cylinder to drop below the atmospheric pressure. The difference in pressure causes the piston to move downwards. After the piston returns to the lowest position, the whole process is repeated. This kind of steam engine is called "atmospheric" because it utilizes atmospheric pressure to cause the downward action of the piston. Since in the downward motion the piston pulls on the beam and in the upward motion the beam pulls on the piston, the connection between the end of the piston rod and the beam is always in tension (it is being stretched by forces at two ends), and that is why a chain is used as the connection.[8] [9]
Ideally, the piston moves in the vertical direction and the piston rod takes only axial loading, i.e. forces applied in the direction along the rod. However, from the above picture, it is clear that the end of the piston rod does not move in a straight line, due to the fact that the end of the beam describes an arc of a circle. As a result, horizontal forces are created and imposed on the piston rod. Consequently, the rate of attrition is very much expedited and the efficiency of the engine is greatly compromised. Durability is important in the design of any machine, but it was especially essential for the early steam engines, for these machines were meant to run 24/7 to make profits for the investors. Therefore, such a defect in the engine posed a great need for improvements.[10]
Improvements were made. Firstly, "double-action" engines were built, part of which is shown in Image 4. Secondly, the beam was dispensed with and replaced by a gear, as shown in Image 5. However, both of these improvements were unsatisfactory and the need for a straight line linkage was still imperative.
Image 4[11] Image 5 [12]
Why were those engines unsatisfactory? In Image 4, atmospheric pressure acts in both upward and downward strokes of the engine and two chains were used (one connected to the top of the arched end of the beam and one to the bottom), which took turns being taut throughout one cycle. One might ask why a chain was used all the time. The answer was simple: to fit the curved end of the beam. However, this did not fundamentally solve the straight line problem and unfortunately created more problems. The additional chain increased the height of the engine and made the manufacturing very difficult (it was hard to make straight steel bars and rods back then) and costly. In Image 5, after the beam was replaced by gear actions, the piston rod was fitted with teeth (labeled k) to drive the gear. Theoretically, this solves the problem fundamentally. The piston rod is confined between the guiding wheel at K and the gear, and it moves only in the up-and-down motion. However, the practical problem remained unsolved. The friction and the noise between all the guideways and the wheels could not be ignored, not to mention the increased possibility of failure and cost of maintenance due to additional parts.[13]
## James Watt's breakthrough
James Watt found a mechanism that converted the linear motion of pistons in the cylinder to the semicircular motion (that is, moving in an arc of a circle) of the beam (or the circular motion of the flywheel) and vice versa. In this way, energy in the vertical direction is converted to rotational energy of the flywheel, from where it is converted to the useful work that the engine is meant to do. In 1784, he invented a three member linkage that solved the linear-motion-to-circular problem practically, as illustrated by the animation below. In its simplest form, there are two radius arms that have the same lengths and a connecting arm with midpoint P. Point P moves in a straight line while the two hinges move in circular arcs. However, this linkage only produced an approximate straight line (a stretched figure 8 actually), as shown in Image 7, much to the chagrin of the mathematicians who were after absolute straight lines. There is a more general form of Watt's linkage in which the two radius arms have different lengths, as shown in Image 6. To make sure that point P still moves in the stretched figure 8, it has to be positioned such that it adheres to the ratio $\frac{AB}{CD} = \frac{CP}{PB}$.[14]
Image 6 [15] Image 7 [16]
## The Motion of Point P
We intend to describe the path of $P$ so that we can show it does not move in a straight line (which is obvious in the animation). More importantly, this will allow us to pinpoint the position of $P$ using certain parameters we know, such as the angle of rotation or one coordinate of point $P$. This is crucial in engineering, as engineers need to know that no two parts of the machine will collide with each other throughout the motion. In addition, you can use the parametrization to create your own animation like that in Image 7.
### Algebraic Description
We see that $P$ moves in a stretched figure 8 and will tend to think that there should be a nice closed form (in mathematics, an expression is said to be a closed-form expression if, and only if, it can be expressed analytically in terms of a bounded number of certain "well-known" functions; typically, these well-known functions are defined to be elementary functions: constants, one variable x, elementary operations of arithmetic (+ – × ÷), nth roots, exponents and logarithms, which thus also include trigonometric functions and inverse trigonometric functions) of the relationship of the $x$ and $y$ coordinates of $P$, like that of the circle. After this section, you will see that there is a closed form, at least theoretically, but it is not "nice" at all.
Image 8
Derivation of the relationship of the $x$ and $y$ coordinates of $P$: we know the coordinates of $A$ and $D$ because they are fixed. Hence suppose the coordinates of $A$ are $(0,0)$ and the coordinates of $D$ are $(c,d)$. We also know the lengths of the bars. Let $AB=CD=r, BC=m$.
Suppose that at one instant we know the coordinates of $B$ to be $(a,b)$; then $C$ will be on the circle centered at $B$ with a radius of $m$. Since $C$ is also on the circle centered at $D$ with radius $r$, the coordinates of $C$ have to satisfy the two equations below.
$\begin{cases} (x-a)^2+(y-b)^2=m^2 \\ (x-c)^2+(y-d)^2=r^2 \end{cases}$
Now, since we know that $B$ is on the circle centered at $A$ with radius $r$, the coordinates of $B$ have to satisfy the equation $a^2+b^2=r^2$.
Therefore, the coordinates of $C$ have to satisfy the three equations below.
$\begin{cases} (x-a)^2+(y-b)^2=m^2 \\ (x-c)^2+(y-d)^2=r^2 \\ a^2+b^2=r^2 \end{cases}$
Now, expanding the first two equations we have,
$\begin{cases} x^2+y^2-2ax-2by+a^2+b^2=m^2 & \text{(Eq. 1)} \\ x^2+y^2-2cx-2dy+c^2+d^2=r^2 & \text{(Eq. 2)} \\ \end{cases}$
Subtracting Eq. 2 from Eq. 1, we have
$(-2a+2c)x-(2b-2d)y+(a^2+b^2)-(c^2+d^2)=m^2-r^2 \qquad \text{(Eq. 3)}$
Substituting $a^2+b^2=r^2$ and rearranging we have,
$(-2a+2c)x-(2b-2d)y=m^2-2r^2+c^2+d^2$
Hence $y=\frac {-2a+2c}{2b-2d}x-\frac {m^2-2r^2+c^2+d^2}{2b-2d} \qquad \text{(Eq. 4)}$
Now, we can manipulate Eq. 3 to get an expression for $b$, i.e. $b=f(a,c,d,m,r,x,y)$. Next, we substitute $b=f(a,c,d,m,r,x,y)$ back into Eq. 1 and will be able to obtain an expression for $a$, i.e. $a=g(x,y,d,c,m,r)$. Since $b=\pm \sqrt {r^2-a^2}$, we have expressions of $a$ and $b$ in terms of $x,y,d,c,m$ and $r$.
Say point $P$ has coordinates $(x',y')$, then $x'=\frac {a+x}{2}$ and $y'=\frac {b+y}{2}$ which will yield
$\begin{cases} x=2x'-a & \text{(Eq. 5)} \\ y=2y'-b & \text{(Eq. 6)} \end{cases}$
In the last step we substitute $a=g(x,y,d,c,m,r)$, $b=\pm \sqrt {r^2-a^2}$, Eq. 5 and Eq. 6 back into Eq. 4, and we finally have a relationship between $x'$ and $y'$. Of course, it will be a messy closed form, but we could use Mathematica to do the maths. The point is, there is no nice algebraic form for that figure 8, though it has a closed form, and that is why we have to find something else.
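As an illustration of how such a messy closed form could actually be produced, here is a minimal sketch in Python/SymPy (not the Mathematica computation mentioned above); the pivot $D=(c,d)$ and the bar lengths $m$, $r$ are made-up numeric values, and the eliminant may contain extraneous factors:

```python
# Sketch: recover the implicit curve of P by eliminating a and b.
# The linkage dimensions c, d, m, r below are made-up numeric values;
# the output polynomial may contain extraneous factors from the elimination.
from sympy import symbols, groebner

a, b, xp, yp = symbols('a b xp yp')       # (xp, yp) are the coordinates of P
c, d, m, r = 4, 1, 1, 2                   # hypothetical pivot D = (c, d) and bar lengths

x, y = 2*xp - a, 2*yp - b                 # Eq. 5 and Eq. 6

eqs = [
    x**2 + y**2 - 2*a*x - 2*b*y + a**2 + b**2 - m**2,   # Eq. 1
    x**2 + y**2 - 2*c*x - 2*d*y + c**2 + d**2 - r**2,   # Eq. 2
    a**2 + b**2 - r**2,                                 # B lies on the circle about A
]

# A lex Groebner basis with a, b listed first lets us read off the
# elimination ideal: members free of a and b vanish on the path of P.
G = groebner([e.expand() for e in eqs], a, b, xp, yp, order='lex')
curve = [g for g in G.exprs if not g.has(a) and not g.has(b)]
print(curve)
```

Any polynomial printed by this script vanishes on the path of $P$ for those particular dimensions, which is the numeric counterpart of the substitution described above.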
### Parametric Description
Alright, since the algebraic equations are not agreeable at all, we have to resort to a parametric description. Come to think of it, it may be more manageable to describe the motion of $P$ using the angle of rotation. As a matter of fact, it is easier to obtain the angle of rotation than to know one of $P$'s coordinates.
Image 9
Parametrization of $P$: we will parametrize $P$ with the angle $\theta$, in keeping with most parametrizations of a point. $\begin{cases} \overrightarrow {AB} = (r \sin \theta, r \cos \theta) \\ \overrightarrow {BC} = (m \sin (\frac {\pi}{2} + \beta + \alpha), m \cos (\frac {\pi}{2} + \beta + \alpha)) \\ \end{cases}$ Now let $BD=l$. Then, using the cosine formula, we have $m^2+l^2-2ml\cos \alpha = r^2$. As a result, we can express $\alpha$ as $\alpha = \cos^{-1} \frac {m^2+l^2-r^2}{2ml}$. Since $l = \sqrt{(c-r \sin \theta)^2+(d-r \cos \theta)^2}$, $c$ and $d$ being the coordinates of point $D$, we can find $\alpha$ in terms of $\theta$. Furthermore, $\begin{align} \overrightarrow {BD} & = \overrightarrow {AD}-\overrightarrow {AB} \\ & = (c,d) - (r\sin \theta, r \cos \theta) \\ & = (c - r\sin \theta, d - r \cos \theta) \end{align}$ Therefore, $\beta = \tan^{-1}\frac {d-r \cos \theta}{c - r \sin \theta}$. Hence, $\begin{align} \overrightarrow {AP} & = \overrightarrow {AB} + \frac {1}{2} \overrightarrow {BC} \\ & = (r \sin \theta, r \cos \theta) + \frac {m}{2}(\sin (\frac {\pi}{2} + \alpha + \beta), \cos (\frac {\pi}{2} + \alpha + \beta)) \\ \end{align}$ Now, $\overrightarrow {AP}$ is parametrized in terms of $\theta, c, d, r$ and $m$.
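The construction can also be traced numerically. The following is a minimal Python sketch (the crank length $r$, coupler length $m$ and pivot $D=(c,d)$ are the same made-up values as in the SymPy sketch above); instead of the $\alpha$ and $\beta$ angles, it locates $C$ by intersecting the circle of radius $m$ about $B$ with the circle of radius $r$ about $D$, and then takes $P$ as the midpoint of $BC$:

```python
# Minimal numeric trace of Watt's linkage (hypothetical dimensions).
# A = (0, 0) and D = (c, d) are the fixed pivots, AB = CD = r, BC = m,
# and P is the midpoint of the coupler BC.
import math

c, d = 4.0, 1.0          # fixed pivot D (made-up values)
r, m = 2.0, 1.0          # crank length and coupler length (made-up values)

for deg in range(30, 121, 10):
    theta = math.radians(deg)
    bx, by = r * math.sin(theta), r * math.cos(theta)   # B = r(sin t, cos t)
    dx, dy = c - bx, d - by                              # vector from B to D
    dist = math.hypot(dx, dy)
    a_len = (dist**2 + m**2 - r**2) / (2.0 * dist)       # distance from B along BD to the chord
    h_sq = m**2 - a_len**2
    if h_sq < 0.0:
        continue                                         # linkage cannot close at this angle
    h = math.sqrt(h_sq)
    # One of the two circle intersection points (the other assembly branch uses -h).
    cx = bx + a_len * dx / dist - h * dy / dist
    cy = by + a_len * dy / dist + h * dx / dist
    px, py = (bx + cx) / 2.0, (by + cy) / 2.0            # P = midpoint of BC
    print(f"theta = {deg:3d} deg   P = ({px: .3f}, {py: .3f})")
```

Sweeping $\theta$ more finely and plotting the printed points shows an elongated arc rather than an exact straight segment; the $+h$ branch in the intersection simply picks one of the two ways the linkage can be assembled.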
Image 10 [18]
Another reason we parametrized $P$ is that Watt did not simply use the three bar linkage shown in Image 6 and Image 7. Instead he used something totally different, and to understand it, our knowledge of the parametrization of $P$ is crucial. Imitations were a big problem back in those days. When filing for a patent, James Watt and other inventors had to explain how their devices worked without revealing the critical secrets that would let others easily copy them. As shown in Image 10, the original patent illustration, Watt illustrated his simple linkage on a separate diagram, but we cannot find it anywhere in the engine illustration itself. That was Watt's secret. What he actually used on his engine was a modified version of the basic linkage, as shown in Image 11. The link $ABCD$ is the original three member linkage with $AB=CD$ and point $P$ being the midpoint of $BC$. A is the pivot of the beam fixed on the engine frame while D is also fixed. However, Watt modified it by adding a parallelogram $BCFE$ to it and connecting point $F$ to the piston rod. We now know that point $P$ moves in a quasi straight line as shown previously. Two points need to move in straight lines here because one has to be connected to the piston rod that drives the beam, and another has to convert the circular motion to linear motion so as to drive the valve gears that control the opening and closing of the valves. It turns out that point F moves in a similar quasi straight line as point P. This is the truly famous James Watt "parallel motion" linkage.
Image 11 Image 12
How would we find the parametric equation for point $F$ then? Well, it is easy enough. Refer to Image 12: $\overrightarrow {AB} = (r \sin \theta, r \cos \theta) \therefore \overrightarrow {AE} = \frac {e+f}{r}(r \sin \theta, r \cos \theta)$. Furthermore, $\overrightarrow {AF} = \overrightarrow {AE} + \overrightarrow {BC}$. Therefore, $\overrightarrow {AF} = \frac {e+f}{r}(r \sin \theta, r \cos \theta) + (m \sin (\frac {\pi}{2} + \beta + \alpha), m \cos (\frac {\pi}{2} + \beta + \alpha))$.
## The First Planar Straight Line Linkage - Peaucellier-Lipkin Linkage
Image 13 [19] Take a minute to ponder the question: "How do you produce a straight line?" We all know, or rather assume, that light travels in a straight line. But does it always do that? Einstein's theory of relativity has shown (and been verified) that light is bent by gravity and therefore, our assumption that light travels in straight lines does not hold all the time. Another simpler method is just to fold a piece of paper, and the crease will be a straight line. Anyway, mathematicians and engineers had been searching for almost a century to find the solution to a straight line linkage, but all had failed until 1864, when a French army officer, Charles Nicolas Peaucellier, came up with his inversor linkage. Interestingly, he did not publish his findings and proof until 1873, when Lipmann I. Lipkin, a student from the University of St. Petersburg, demonstrated the same working model at the World Exhibition in Vienna. Peaucellier acknowledged Lipkin's independent findings with the publication of the details of his discovery in 1864 and the mathematical proof (Taimina). Now, the linkage that produces a straight line motion is much more complicated than folding a piece of paper, but the Peaucellier-Lipkin Linkage is amazingly simple, as shown in Image 13. In the next section, a proof of how this linkage draws a straight line is provided.
Image 14
Let's turn to a skeleton drawing of the Peaucellier-Lipkin linkage in Image 14. It is constructed in such a way that $OA = OB$ and $AC=CB=BP=PA$. Furthermore, all the bars are free to rotate at every joint, and point $O$ is a fixed pivot. Due to the symmetrical construction of the linkage, it goes without proof that points $O$, $C$ and $P$ lie on a straight line. Construct lines $OCP$ and $AB$; they meet at point $M$. Since $APBC$ is a rhombus, $AB \perp CP$ and $CM = MP$. Now, $(OA)^2 = (OM)^2 + (AM)^2$ and $(AP)^2 = (PM)^2 + (AM)^2$. Therefore, $\begin{align} (OA)^2 - (AP)^2 & = (OM)^2 - (PM)^2\\ & = (OM-PM)\cdot(OM + PM)\\ & = OC \cdot OP\\ \end{align}$ since $OM - PM = OM - CM = OC$ and $OM + PM = OP$. Let's take a moment to look at the relation $(OA)^2 - (AP)^2 = OC \cdot OP$. Since the lengths $OA$ and $AP$ are constant, the product $OC \cdot OP$ has a constant value no matter how the linkage is flexed.
Image 15
Refer to Image 15. Let's fix the path of point $C$ so that it traces out a circle that passes through point $O$: $QC$ is an extra link pivoted to the fixed point $Q$ with $QC=QO$. Construct line $OQ$ and extend it to cut the circle at point $R$. In addition, construct line $PN$ such that $PN \perp OR$. Since $OR$ is a diameter of this circle, $\angle OCR = 90^\circ$ (angle in a semicircle). We therefore have $\vartriangle OCR \sim \vartriangle ONP$, so $\frac{OC}{OR} = \frac{ON}{OP}$, and hence $OC \cdot OP = ON \cdot OR$. Therefore $ON = \frac {OC \cdot OP}{OR} =$ constant, i.e. the length of $ON$ (the x-coordinate of $P$ with respect to $O$) does not change as points $C$ and $P$ move. Hence, point $P$ moves in a straight line. ∎[20]
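The argument is easy to illustrate numerically. The sketch below (Python/numpy, with arbitrary link lengths of my choosing) does not simulate the bars; it places $P$ on the ray $OC$ using the derived relation $OC\cdot OP = (OA)^2-(AP)^2$, lets $C$ run around a circle through $O$, and checks that the $x$-coordinate of $P$ (the length $ON$) stays fixed.

```python
import numpy as np

L, s = 5.0, 2.0            # assumed lengths: long links OA = OB = L, rhombus side AP = s
K = L**2 - s**2            # the invariant OC * OP derived above
q = 1.5                    # QC = QO = q, so C runs on a circle of radius q through O

for t in np.linspace(0.3, 2.8, 6):                   # position of C on its circle (avoiding C = O)
    C = np.array([q + q*np.cos(t), q*np.sin(t)])     # circle centered at Q = (q, 0), passing through O
    OC = np.linalg.norm(C)
    P = (K / OC**2) * C                              # P on ray OC with OC * OP = K
    print(f"OC*OP = {OC*np.linalg.norm(P):.6f},  x-coordinate of P = {P[0]:.6f}")
# OC*OP stays equal to K, and the x-coordinate of P stays at K/(2q): P moves on a vertical line.
```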
## Inversive Geometry in Peaucellier-Lipkin Linkage
In fact, the first part of the proof given above is already sufficient. By inversive geometry, once we have shown that points $O$, $C$ and $P$ are collinear and that $OC \cdot OP$ has a constant value, points $C$ and $P$ are an inversive pair with $O$ as the inversive center. Therefore, whenever $C$ moves on a circle that passes through $O$, $P$ moves in a straight line, and vice versa. ∎ See Inversion for more detail.
## Peaucellier-Lipkin Linkage in Action
Image 16
The new linkage caused considerable excitement in London. Mr. Prim, "engineer to the House", utilized the new compact form invented by H. Hart to fit his new blowing engines, which proved to be "exceptionally quiet in their operation". In this compact form, $DA=DC$, $AF=CF$ and $AB = BC$. Points $E$ and $F$ are fixed pivots. In Image 16, $F$ is the inversive center; points $D$, $F$ and $B$ are collinear and $DF \cdot DB$ has a constant value.
Image 17 Mr. Prim's blowing engine used for ventilating the House of Commons, 1877. The crosshead of the reciprocating air pump is guided by a Peaucellier linkage shown in Image 17. The slate-lined air cylinders had rubber-flap inlet and exhaust valves and a piston whose periphery was formed by two rows of brush bristles. Prim's machine was driven by a steam engine.[21]
## Hart's Linkage
After the Peaucellier-Lipkin linkage was introduced to England in 1874, Mr. Hart of Woolwich Academy [22] devised a new linkage that contains only four links, shown as the blue part of Image 18. The next part will prove that point $O$ is the inversion center, with $O$, $P$ and $Q$ collinear and $OP \cdot OQ =$ constant. When point $P$ is constrained to move on a circle that passes through point $O$, point $Q$ will trace out a straight line. See below for the proof.
Image 18
We know that $AB = CD$ and $BC = AD$. As a result, $BD \parallel AC$. Draw line $OQ \parallel AC$, intersecting $AD$ at point $P$; consequently, points $O, P, Q$ are collinear. Construct rectangle $EFCA$. Then $\begin{align} AC \cdot BD & = EF \cdot BD \\ & = (ED + EB) \cdot (ED - EB) \\ & = (ED)^2 - (EB)^2 \\ \end{align}$ Since $(ED)^2 + (AE)^2 = (AD)^2$ and $(EB)^2 + (AE)^2 = (AB)^2$, we then have $AC \cdot BD = (ED)^2 - (EB)^2 = (AD)^2 - (AB)^2$. Further, define $\frac{OP}{BD} = m$; hence $\frac{OQ}{AC} = 1-m$, where $0<m<1$. We finally have $\begin{align} OP \cdot OQ & = m(1-m)\,BD \cdot AC\\ & = m(1-m)\left((AD)^2 - (AB)^2\right) \end{align}$ which is constant, as we wanted to prove.
## Other Straight Line Mechanism
| | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------|
| | | |
| Image 19 | Image 20 | Image 21 [23] |
| There are many other mechanisms that produce straight-line motion; I will only introduce one of them here. Refer to Image 19. Consider two circles $C_1$ and $C_2$ whose radii satisfy $2r_2=r_1$. We roll $C_2$ inside $C_1$ without slipping, as shown in Image 20. Rolling without slipping means the arc lengths are equal, so $r_1\beta = r_2\alpha$. Voila! $\alpha = 2\beta$, and point $C$ has to be on the line joining the original points $P$ and $Q$! The same argument goes for point $P$. As a result, point $C$ moves along the horizontal line and point $P$ moves along the vertical line. In 1801, James White patented his mechanism using this rolling motion. It is shown in Image 21 [24]. | | |
| | | |
| Image 22 | | |
| Interestingly, if you attach a rod of fixed length to points $C$ and $P$, the end of the rod, $T$, will trace out an ellipse, as seen in Image 22. Why? Consider the coordinates of $T$ in terms of $\theta$, $PT$ and $CT$. Point $T$ has the coordinates $(CT \cos \theta, PT \sin \theta)$. Now, whenever we see $\cos \theta$ and $\sin \theta$ together, we want to square them. Hence, $x^2=CT^2 \cos^2 \theta$ and $y^2=PT^2 \sin^2 \theta$. They are not so pretty yet, so we make them pretty by dividing $x^2$ by $CT^2$ and $y^2$ by $PT^2$, obtaining $\frac {x^2}{CT^2} = \cos^2 \theta$ and $\frac {y^2}{PT^2} = \sin^2 \theta$. Voila again! $\frac {x^2}{CT^2} + \frac {y^2}{PT^2}=1$, and this is exactly the algebraic equation of an ellipse. [25] (A short numerical check of this identity follows the table.) | | |
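As a sanity check on the ellipse claim in the table above, the following sketch (Python/numpy, with arbitrary values for $CT$ and $PT$) evaluates the parametrisation of $T$ and confirms that it satisfies $\frac{x^2}{CT^2}+\frac{y^2}{PT^2}=1$.

```python
import numpy as np

CT, PT = 3.0, 2.0                                   # assumed rod segments |CT| and |PT|
theta = np.linspace(0.0, 2 * np.pi, 200)
x, y = CT * np.cos(theta), PT * np.sin(theta)       # T = (CT cos(theta), PT sin(theta))
print(np.allclose(x**2 / CT**2 + y**2 / PT**2, 1.0))   # True: T stays on the ellipse
```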
# Conclusion---The Take Home Message
We should not take the concept of a straight line for granted; there are many interesting and important issues surrounding it. A serious exploration of its properties and constructions will not only give you a glimpse of geometry's all-encompassing reach into science, engineering and our lives, but also make you question many of the assumptions you have about geometry. Hopefully, you will start questioning the flatness of a plane, the roundness of a circle and the nature of a point. This is how real science and amazing discoveries were made, and this is how you should learn and appreciate them.
# About the Creator of this Image
KMODDL is a collection of mechanical models and related resources for teaching the principles of kinematics--the geometry of pure motion. The core of KMODDL is the Reuleaux Collection of Mechanisms and Machines, an important collection of 19th-century machine elements held by Cornell's Sibley School of Mechanical and Aerospace Engineering.
# Notes
2. ↑ Bryant, & Sangwin, 2008, p. 34
3. ↑ Kempe, 1877, p. 12
4. ↑ Taimina
5. ↑ Wikipedia (Cartesian coordinate system)
6. ↑ Weisstein
7. ↑ Bryant, & Sangwin, 2008, p. 18
8. ↑ Bryant, & Sangwin, 2008, p. 18
9. ↑ Wikipedia (Steam Engine)
10. ↑ Bryant, & Sangwin, 2008, p. 18-21
11. ↑ Bryant, & Sangwin, 2008, p. 18-21
12. ↑ Bryant, & Sangwin, 2008, p. 18-21
13. ↑ Bryant, & Sangwin, 2008, p. 18-21
14. ↑ Bryant, & Sangwin, 2008, p. 24
15. ↑ Bryant, & Sangwin, 2008, p. 23
17. ↑ Wikipedia (Closed-form expression)
18. ↑ Lienhard, 1999, February 18
20. ↑ Bryant, & Sangwin, 2008, p. 33-36
21. ↑ Ferguson, 1962, p. 205
22. ↑ Kempe, 1877, p. 18
23. ↑ Bryant, & Sangwin, 2008, p.44
24. ↑ Bryant, & Sangwin, 2008, p.42-44
25. ↑ Cundy, & Rollett, 1961, p. 240
# References
1. Bryant, John, & Sangwin, Christopher. (2008). How round is your circle? Princeton & Oxford: Princeton University Press.
2. Cundy, H. Martyn, & Rollett, A. P. (1961). Mathematical models. Oxford: Clarendon Press, Oxford University Press.
3. Henderson, David. (2001). Experiencing geometry. Upper Saddle River, New Jersey: Prentice Hall.
4. Kempe, A. B. (1877). How to draw a straight line; a lecture on linkages. London: Macmillan and Co.
5. Taimina, D. (n.d.). How to draw a straight line. Retrieved from the Kinematic Models for Design Digital Library: http://kmoddl.library.cornell.edu/tutorials/04/
6. Ferguson, Eugene S. (1962). Kinematics of mechanisms from the time of Watt. United States National Museum Bulletin, (228), 185-230.
7. Weisstein, Eric W. (n.d.). Great circle. Retrieved from MathWorld--A Wolfram Web Resource: http://mathworld.wolfram.com/GreatCircle.html
8. Wikipedia. (n.d.). Cartesian coordinate system. Retrieved from Wikipedia: http://en.wikipedia.org/wiki/Cartesian_coordinate_system
9. Lienhard, J. H. (1999, February 18). "I SELL HERE, SIR, WHAT ALL THE WORLD DESIRES TO HAVE -- POWER". Retrieved from The Engines of Our Ingenuity: http://www.uh.edu/engines/powersir.htm
# Future Directions for this Page
I need to change the size of the main picture, and perhaps add some more theoretical description of what a straight line is here.
Today, we simply define a line as a one-dimensional object that extends to infinity in both directions and is straight, i.e. has no wiggles along its length. But what is straightness? It is a hard question, because we can picture it but we cannot quite articulate it.
In his Elements, Euclid defined a straight line as "lying evenly between its extreme points" and a line as having "breadthless length". This definition is not of much practical use: what does he mean by "lying evenly"? It tells us nothing about how to describe or construct a straight line.
So what is straightness anyway? There are a few good answers. For instance, in Cartesian coordinates (a system that specifies each point of the plane uniquely by a pair of numbers, the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length), the graph of $y=ax+b$ is a straight line, as shown in Image 1. In addition, the shortest path between two points on a flat plane is a straight line, the definition we are most familiar with. However, it is important to realize that the notions of "shortest" and "straight" change when you are no longer on a flat plane. For example, the shortest path between two points on a sphere runs along a "great circle", as shown in Image 2: a circle cut out by a plane that contains a diameter of the sphere. A great circle is straight on the spherical surface.
Since we are dealing with plane geometry here, we define a straight line as the curve $y=ax+b$ in Cartesian coordinates.
For more properties of straight lines, you can refer to the book Experiencing Geometry by David W. Henderson.
http://mathhelpforum.com/advanced-statistics/142359-normal-distribution-please-help-asap-2-a.html
# Thread:
1. ## Normal distribution - please help ASAP! 2
3) Suppose that a 100-point test (scores are whole numbers) is administered to every high school student in the
USA at the start of their senior year and that the scores on this test are normally distributed with a mean of 70
and standard deviation of 10. If 5 scores are selected at random, what is the probability that exactly 3 of these
scores are between 65 and 75, inclusive?
4) Let X be a normal random variable with mean μ and standard deviation σ. Show that the expected value and
variance of the quantity (X − μ)/σ are 0 and 1, respectively.
2. Originally Posted by MiyuCat
3) Suppose that a 100-point test (scores are whole numbers) is administered to every high school student in the
USA at the start of their senior year and that the scores on this test are normally distributed with a mean of 70
and standard deviation of 10. If 5 scores are selected at random, what is the probability that exactly 3 of these
scores are between 65 and 75, inclusive?
You need to find the probability $a$ such that $P(65\leq X\leq 75) = P\left(\frac{65-70}{10}\leq Z\leq \frac{75-70}{10}\right)= \dots = a$
Then find $P(Y=3)$ where $Y$ is binomial with $p= a, n= 5$
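A quick numerical sketch of this recipe, assuming `scipy` is available (whether to add a continuity correction for the integer scores is a separate modelling choice that the recipe above ignores):

```python
from scipy.stats import norm, binom

mu, sigma = 70, 10
a = norm.cdf(75, mu, sigma) - norm.cdf(65, mu, sigma)   # P(65 <= X <= 75) = P(-0.5 <= Z <= 0.5)
print(a)                       # about 0.3829
print(binom.pmf(3, 5, a))      # P(exactly 3 of the 5 sampled scores fall in the range)
```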
3. Originally Posted by MiyuCat
[snip]
4) Let X be a normal random variable with mean μ and standard deviation σ. Show that the expected value and
variance of the quantity (X − μ)/σ are 0 and 1, respectively.
Set up the required integrals, then make the substitution z = (x − μ)/σ and use standard results.
Alternatively, use the following well known theorems:
E(aX + b) = aE(X) + b and Var(aX + b) = a^2Var(X).
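A small Monte Carlo sanity check of these theorems applied to $Z=(X-\mu)/\sigma$ (a sketch with numpy; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 70.0, 10.0
x = rng.normal(mu, sigma, size=1_000_000)
z = (x - mu) / sigma
print(z.mean(), z.var())   # close to 0 and 1 respectively, as predicted
```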
http://mathhelpforum.com/differential-geometry/158711-derivative-0-constant-function.html
# Thread:
1. ## Derivative 0 => constant function
Suppose that M, N are smooth manifolds and that M is connected. Let $f: M \rightarrow N$ be differentiable so that $d_x f = 0$ for all $x \in M$.
Show that f must be constant.
Anyone got any hint? Why do we need M to be connected?
2. If we don't require M to be connected, then we can let $M=[0,1] \cup [2,3]$, $N = [0,1]$ and $f(x) = \begin{cases} 1 & x \in [0,1]\\ 0 & x \in [2,3] \end{cases}$
3. Ok, I see. Now how to start proving this?
4. Use proof by contradiction. Suppose that $f(x)\ne f(y)$ and show that this leads to a non-zero derivative.
http://math.stackexchange.com/questions/301244/diophantine-equations-perfect-square-and-perfect-cube-related
# Diophantine equations - Perfect square and Perfect cube related
Solve following Diophantine equations:
1) $a^3-a^2+8=b^2$
2) $a, b, c \in \mathbb{Z^+}$: $$\frac{a^3}{(b+3)(c+3)} + \frac{b^3}{(c+3)(a+3)} + \frac{c^3}{(a+3)(b+3)} = 7$$
3) $a^3-8=b^2$
In Problem 2 I tried to use inequality, then I can 'limit' that: $25 \ge a+b+c$ and $a^3 + b^3 + c^3 \ge 112$
Please use an elementary method to solve these; I haven't studied elliptic curves yet. Thanks.
As a hint for Problem 1), try adding 4 to both sides of the equation and see what happens... – Mike Bennett Feb 12 at 19:40
Sorry, professor, but could you give me more detail? I am not good at Diophantine equations. I have tried adding 4 to both sides before, but it led to $(x+2)(x^2 - 3x + 6)=y^2 + 4$. I also have $12 \mid y$ and $x \equiv 2 \pmod 3$, but that didn't seem to help. – D3r0X4 Feb 13 at 9:44
Next step : which primes can divide $y^2+4$? – Mike Bennett Feb 13 at 20:15
## 1 Answer
(2) Extremely ugly solution.
You have
$$a^3(a+3)+b^3(b+3)+c^3(c+3)=7(a+3)(b+3)(c+3)$$
or
$$(x-3)^3x+(y-3)^3y+(z-3)^3z=7xyz$$
Since it is symmetric, we can look for the solutions where $x \geq y \geq z$ (each of them at least $4$).
Since the other two terms on the left are positive, $(x-3)^3 x \leq 7xyz \leq 7x^3$, so $(x-3)^3 \leq 7x^2$. It is easy to check that for $x \geq 15$ we have $(x-3)^3>7x^2$. This shows that $4 \leq x \leq 14$. For each particular $x$ you get a simpler equation which can be handled the same way.
P.S. It probably also helps observing that modulo 3 you have
$$(w-3)^3w \equiv 0,1 \pmod 3$$
Then, if none of $x,y,z$ is divisible by $3$, each term on the left is $1 \pmod 3$, so the LHS is $0 \pmod 3$ while the RHS $7xyz$ is not divisible by $3$, which is not possible.
If one of $x,y,z$ is divisible by $3$, then all of them must be divisible by 3, and looking at the equation it follows that one of them is divisible by 9.
This should solve the equation.
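For completeness, here is a short brute-force sketch (Python, my addition rather than part of the original answer) over the bounded range $4 \le z \le y \le x \le 14$ established above; it reports the corresponding $(a,b,c)=(x-3,\,y-3,\,z-3)$. For instance, $(a,b,c)=(6,3,3)$ does satisfy the original equation, since $\frac{216}{36}+\frac{27}{54}+\frac{27}{54}=7$.

```python
# Search the bounded range derived above: x >= y >= z, all in [4, 14].
solutions = []
for x in range(4, 15):
    for y in range(4, x + 1):
        for z in range(4, y + 1):
            lhs = (x - 3)**3 * x + (y - 3)**3 * y + (z - 3)**3 * z
            if lhs == 7 * x * y * z:
                solutions.append((x - 3, y - 3, z - 3))   # back to (a, b, c)
print(solutions)
```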
http://mathematica.stackexchange.com/questions/17176/how-to-create-help-for-a-function-as-documentation?answertab=active
# How to create 'help' ? for a function (as documentation)
I have defined a function `PassVeltB` in some complicated way. But now I would like to add a little documentation that gives the user information on how it is to be used.
I know that the `?` symbol could be placed before a built-in function to generate useful information about that built-in function. Is there a way I can add helpful information about my `PassVeltB` function in a similar way?
## 2 Answers
You can include usage info using the `::usage` tag as follows
````foo::usage = "foo[x] takes one argument and returns nothing"
````
Using `Information[foo]` or `?foo` will display the string in the above message
````?foo
(* "foo[x] takes one argument and returns nothing" *)
````
In addition, in version 8 (and some older versions), pressing Cmd+Shift+K will complete the template if you start the usage message with `foo[x] ...`. Unfortunately, this doesn't work in version 9.
The simplest way to provide short documentation is to add a usage message to the function, as described in the other answer above.
For more in-depth documentation (as in the Mathematica help), you can use Wolfram Workbench; a tutorial can be found at
http://www.wolfram.co.uk/products/workbench/.
Alternatively, you can have a look at this answer:
http://stackoverflow.com/questions/6574710/integrating-notebooks-to-mathematicas-documentation-center
it might be the best way, but for sure not the simplest. – Yves Klett Jan 3 at 8:51
Yes I agree, I modified my answer accordingly. – Faysal Aberkane Jan 3 at 10:10
http://math.stackexchange.com/questions/228801/solutions-of-legendre-equation-for-vert-x-vert-leq-1
# Solutions of legendre equation for $\vert x\vert \leq 1$
Why do books say that it is necessary in the Legendre equation for $l$ to be an integer if you want solutions regular on $\vert x\vert \leq 1$? It does not seem necessary to me.
Thanks in advance.
"It seems not necessary." - what gave you the idea? Both kinds of Legendre functions will necessarily display singularities at $\pm 1$, with the notable exception of the Legendre polynomials (and that bit has a whole lot to do with the integer restriction)... – J. M. Nov 8 '12 at 6:27
http://mathhelpforum.com/discrete-math/10473-how-many-vertices-does-graph-have.html
# Thread:
1. ## how many vertices does this graph have.....
Hello,
plz try to solve this question.
QUESTION:
Suppose that a connected planar simple graph has 25 edges. If a plane drawing of this graph has 10 faces, how many vertices does this graph have?
2. Originally Posted by m777
Hello,
plz try to solve this question.
QUESTION:
Suppose that a connected planar simple graph has 25 edges. If a plane drawing of this graph has 10 faces, how many vertices does this graph have?
Such a graph satisfies Euler's Polyhedron formula:
n-m+f=2
where n is the number of vertices, m the number of edges, and f the
number of faces. So:
n=2+25-10=17.
RonL
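As a trivial check of the arithmetic, here is a tiny Python sketch (the function and argument names are mine, purely for illustration):

```python
def planar_vertices(edges, faces):
    """Solve Euler's formula n - m + f = 2 for n, for a connected planar graph."""
    return 2 + edges - faces

print(planar_vertices(25, 10))   # 17
```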
3. Originally Posted by CaptainBlack
Such a graph satisfies Euler's Polyhedron formula:
n-m+f=2
where n is the number of vertices, m the number of edges, and f the
number of faces. So:
n=2+25-10=17.
RonL
Is this just pretty much the same as Euler's Relationship:
R + N = A + 2 ?
4. Originally Posted by anthmoo
Is this just pretty much the same as Euler's Relationship:
R + N = A + 2 ?
I am not sure what you are saying but there is:
1) Euler's formula for planar graphs.
2) Euler's formula for polyhedra.
The two formulas look the same, but #1 is stronger than #2,
because I believe one way of proving #2 is from #1.
5. Originally Posted by CaptainBlack
Such a graph satisfies Euler's Polyhedron formula:
n-m+f=2
where n is the number of vertices, m the number of edges, and f the
number of faces. So:
n=2+25-10=17.
RonL
is there a way of finding the number of faces (or regions, as it's called in our class notes) just from looking at a graph? E.g. how many faces/regions does $K_{3,3}$ have?
http://en.wikipedia.org/wiki/Exponential_map
# Exponential map
The exponential map of the Earth as viewed from the north pole is the polar azimuthal equidistant projection in cartography.
In differential geometry, the exponential map is a generalization of the ordinary exponential function of mathematical analysis to all differentiable manifolds with an affine connection. Two important special cases of this are the exponential map for a manifold with a Riemannian metric, and the exponential map from a Lie algebra to a Lie group.
## Definition
Let M be a differentiable manifold and p a point of M. An affine connection on M allows one to define the notion of a geodesic through the point p.[1]
Let v ∈ TpM be a tangent vector to the manifold at p. Then there is a unique geodesic γv satisfying γv(0) = p with initial tangent vector γ′v(0) = v. The corresponding exponential map is defined by expp(v) = γv(1). In general, the exponential map is only locally defined, that is, it only takes a small neighborhood of the origin in TpM to a neighborhood of p in the manifold. This is because it relies on the theorem on existence and uniqueness for ordinary differential equations, which is local in nature. An affine connection is called complete if the exponential map is well-defined at every point of the tangent bundle.
## Lie theory
In the theory of Lie groups, the exponential map is a map from the Lie algebra of a Lie group to the group which allows one to recapture the local group structure from the Lie algebra. The existence of the exponential map is one of the primary justifications for the study of Lie groups at the level of Lie algebras.
The ordinary exponential function of mathematical analysis is a special case of the exponential map when G is the multiplicative group of non-zero real numbers (whose Lie algebra is the additive group of all real numbers). The exponential map of a Lie group satisfies many properties analogous to those of the ordinary exponential function; however, it also differs in many important respects.
### Definitions
Let $G$ be a Lie group and $\mathfrak g$ be its Lie algebra (thought of as the tangent space to the identity element of $G$). The exponential map is a map
$\exp\colon \mathfrak g \to G$
which can be defined in several different ways as follows:
• It is the exponential map of a canonical left-invariant affine connection on G, such that parallel transport is given by left translation.
• It is the exponential map of a canonical right-invariant affine connection on G. This is usually different from the canonical left-invariant connection, but both connections have the same geodesics (orbits of 1-parameter subgroups acting by left or right multiplication) so give the same exponential map.
• It is given by $\exp(X) = \gamma(1)$ where
$\gamma\colon \mathbb R \to G$
is the unique one-parameter subgroup of $G$ whose tangent vector at the identity is equal to $X$. It follows easily from the chain rule that $\exp(tX) = \gamma(t)$. The map $\gamma$ may be constructed as the integral curve of either the right- or left-invariant vector field associated with $X$. That the integral curve exists for all real parameters follows by right- or left-translating the solution near zero.
• If $G$ is a matrix Lie group, then the exponential map coincides with the matrix exponential and is given by the ordinary series expansion:
$\exp (X) = \sum_{k=0}^\infty\frac{X^k}{k!} = I + X + \frac{1}{2}X^2 + \frac{1}{6}X^3 + \cdots$
(here $I$ is the identity matrix). A short numerical sketch of this series appears after the list below.
• If G is compact, it has a Riemannian metric invariant under left and right translations, and the exponential map is the exponential map of this Riemannian metric.
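The following is a small numerical sketch of the series definition, assuming numpy and scipy are available: it sums the truncated series for a skew-symmetric matrix $X \in \mathfrak{so}(3)$, compares it with `scipy.linalg.expm`, and checks that the result is orthogonal, i.e. lands in $SO(3)$.

```python
import numpy as np
from scipy.linalg import expm

# An element of the Lie algebra so(3): a skew-symmetric matrix (entries chosen arbitrarily).
X = np.array([[ 0.0, -0.3,  0.2],
              [ 0.3,  0.0, -0.1],
              [-0.2,  0.1,  0.0]])

# Truncated exponential series: sum of X^k / k! for k = 0 .. 23.
series = np.zeros_like(X)
term = np.eye(3)
for k in range(1, 25):
    series += term          # adds X^(k-1) / (k-1)!
    term = term @ X / k     # becomes X^k / k!

print(np.allclose(series, expm(X)))        # True: the truncated series matches expm
R = expm(X)
print(np.allclose(R.T @ R, np.eye(3)))     # True: exp of a skew-symmetric matrix is orthogonal
```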
### Examples
• The unit circle centered at 0 in the complex plane is a Lie group (called the circle group) whose tangent space at 1 can be identified with the imaginary line in the complex plane, $\{it:t\in\mathbb R\}.$ The exponential map for this Lie group is given by
$it \mapsto \exp(it) = e^{it} = \cos(t) + i\sin(t),\,$
that is, the same formula as the ordinary complex exponential.
• In the split-complex number plane $z = x + y \jmath , \quad \jmath^2 = +1,$ the imaginary line $\lbrace \jmath t : t \in \mathbb R \rbrace$ forms the Lie algebra of the unit hyperbola group $\lbrace \cosh t + \jmath \ \sinh t : t \in \mathbb R \rbrace$ since the exponential map is given by
$\jmath t \mapsto \exp(\jmath t) = \cosh t + \jmath \ \sinh t.$
• The unit 3-sphere $S^3$ centered at 0 in the quaternions H is a Lie group (isomorphic to the special unitary group $SU(2)$) whose tangent space at 1 can be identified with the space of purely imaginary quaternions, $\{it+ju + kv :t, u, v\in\mathbb R\}.$ The exponential map for this Lie group is given by
$\bold{w} = (it+ju+kv) \mapsto \exp(it+ju+kv) = \cos(|\bold{w}|) + \sin(|\bold{w}|)\frac{\bold{w}}{|\bold{w}|}.\,$
This map takes the 2-sphere of radius $R$ inside the purely imaginary quaternions to $\{s\in S^3 \subset \bold{H}: \operatorname{Re}(s) = \cos(R)\} =$ a 2-sphere of radius $\sin(R)$ when $R\not\equiv 0\pmod{2\pi}$. Compare this to the first example above.
### Properties
• For all $X\in\mathfrak g$, the map $\gamma(t) = \exp(tX)$ is the unique one-parameter subgroup of $G$ whose tangent vector at the identity is $X$. It follows that:
• $\exp((t+s)X) = (\exp tX)(\exp sX)\,$
• $\exp(-X) = (\exp X)^{-1}.\,$
• The exponential map $\exp\colon \mathfrak g \to G$ is a smooth map. Its derivative at the identity, $\exp_{*}\colon \mathfrak g \to \mathfrak g$, is the identity map (with the usual identifications). The exponential map, therefore, restricts to a diffeomorphism from some neighborhood of 0 in $\mathfrak g$ to a neighborhood of 1 in $G$.
• The exponential map is not, however, a covering map in general – it is not a local diffeomorphism at all points. For example, so(3) to SO(3) is not a covering map; see also cut locus on this failure.
• The image of the exponential map always lies in the identity component of $G$. When $G$ is compact, the exponential map is surjective onto the identity component.
• The image of the exponential map of the connected but non-compact group SL2(R) is not the whole group. Its image consists of C-diagonalizable matrices with eigenvalues either positive or of modulus 1, and of non-diagonalizable triangularizable matrices with eigenvalue 1.
• The map $\gamma(t) = \exp(tX)$ is the integral curve through the identity of both the right- and left-invariant vector fields associated to $X$.
• The integral curve through $g\in G$ of the left-invariant vector field $X^L$ associated to $X$ is given by $g \exp(t X)$. Likewise, the integral curve through $g$ of the right-invariant vector field $X^R$ is given by $\exp(t X) g$. It follows that the flows $\xi^{L,R}$ generated by the vector fields $X^{L,R}$ are given by:
• $\xi^L_t = R_{\exp tX}$
• $\xi^R_t = L_{\exp tX}.$
Since these flows are globally defined, every left- and right-invariant vector field on $G$ is complete.
• Let $\phi\colon G \to H$ be a Lie group homomorphism and let $\phi_{*}$ be its derivative at the identity. Then the following diagram commutes:
• In particular, when applied to the adjoint action of a group $G$ we have
• $g(\exp X)g^{-1} = \exp(\mathrm{Ad}_gX)\,$
• $\mathrm{Ad}_{\exp X} = \exp(\mathrm{ad}_X).\,$
## Riemannian geometry
In Riemannian geometry, an exponential map is a map from a subset of a tangent space TpM of a Riemannian manifold (or pseudo-Riemannian manifold) M to M itself. The (pseudo) Riemannian metric determines a canonical affine connection, and the exponential map of the (pseudo) Riemannian manifold is given by the exponential map of this connection.
### Properties
Intuitively speaking, the exponential map takes a given tangent vector to the manifold, runs along the geodesic starting at that point and going in that direction, for a unit time. Since v corresponds to the velocity vector of the geodesic, the actual (Riemannian) distance traveled will be dependent on that. We can also reparametrize geodesics to be unit speed, so equivalently we can define expp(v) = β(|v|) where β is the unit-speed geodesic (geodesic parameterized by arc length) going in the direction of v. As we vary the tangent vector v we will get, when applying expp, different points on M which are within some distance from the base point p—this is perhaps one of the most concrete ways of demonstrating that the tangent space to a manifold is a kind of "linearization" of the manifold.
The Hopf–Rinow theorem asserts that it is possible to define the exponential map on the whole tangent space if and only if the manifold is complete as a metric space (which justifies the usual term geodesically complete for a manifold having an exponential map with this property). In particular, compact manifolds are geodesically complete. However even if expp is defined on the whole tangent space, it will in general not be a global diffeomorphism. However, its differential at the origin of the tangent space is the identity map and so, by the inverse function theorem we can find a neighborhood of the origin of TpM on which the exponential map is an embedding (i.e., the exponential map is a local diffeomorphism). The radius of the largest ball about the origin in TpM that can be mapped diffeomorphically via expp is called the injectivity radius of M at p. The cut locus of the exponential map is, roughly speaking, the set of all points where the exponential map fails to have a unique minimum.
An important property of the exponential map is the following lemma of Gauss (yet another Gauss's lemma): given any tangent vector v in the domain of definition of expp, and another vector w based at the tip of v (hence w is actually in the double-tangent space Tv(TpM)) and orthogonal to v, w remains orthogonal to v when pushed forward via the exponential map. This means, in particular, that the boundary sphere of a small ball about the origin in TpM is orthogonal to the geodesics in M determined by those vectors (i.e., the geodesics are radial). This motivates the definition of geodesic normal coordinates on a Riemannian manifold.
The exponential map is also useful in relating the abstract definition of curvature to the more concrete realization of it originally conceived by Riemann himself—the sectional curvature is intuitively defined as the Gaussian curvature of some surface (i.e., a slicing of the manifold by a 2-dimensional submanifold) through the point p in consideration. Via the exponential map, it now can be precisely defined as the Gaussian curvature of a surface through p determined by the image under expp of a 2-dimensional subspace of TpM.
## Relationships
In the case of Lie groups with a bi-invariant metric—a pseudo-Riemannian metric invariant under both left and right translation—the exponential maps of the pseudo-Riemannian structure are the same as the exponential maps of the Lie group. In general, Lie groups do not have a bi-invariant metric, though all connected semi-simple (or reductive) Lie groups do. The existence of a bi-invariant Riemannian metric is stronger than that of a pseudo-Riemannian metric, and implies that the Lie algebra is the Lie algebra of a compact Lie group; conversely, any compact (or abelian) Lie group has such a Riemannian metric.
Take the example that gives the "honest" exponential map. Consider the positive real numbers R+, a Lie group under the usual multiplication. Then each tangent space is just R. On each copy of R at the point y, we introduce the modified inner product
$\langle u,v\rangle_y = \frac{uv}{y^2}$
(multiplying them as usual real numbers but scaling by y2). (This is what makes the metric left-invariant, for left multiplication by a factor will just pull out of the inner product, twice — canceling the square in the denominator).
Consider the point 1 ∈ R+, and x ∈ R an element of the tangent space at 1. The usual straight line emanating from 1, namely y(t) = 1 + xt covers the same path as a geodesic, of course, except we have to reparametrize so as to get a curve with constant speed ("constant speed", remember, is not going to be the ordinary constant speed, because we're using this funny metric). To do this we reparametrize by arc length (the integral of the length of the tangent vector in the norm $|\cdot|_y$ induced by the modified metric):
$s(t) = \int_0^t |x|_{y(\tau)} d\tau = \int_0^t \frac{|x|}{1 + \tau x} d\tau = |x| \int_0^t \frac{d\tau}{1 + \tau x} = \frac{|x|}{x} \ln|1 + tx|$
and after inverting the function to obtain t as a function of s, we substitute and get
$y(s)=e^{sx/|x|}$
Now using the unit speed definition, we have
$\exp_1(x)=y(|x|_1)=y(|x|)$,
giving the expected ex.
The Riemannian distance defined by this is simply
$\operatorname{dist}(a,b) = |\ln(b/a)|$,
a metric which should be familiar to anyone who has drawn graphs on log paper.
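A quick numerical check of this distance formula (a sketch using `scipy.integrate.quad`; the endpoints are arbitrary): the length of the straight segment from a to b, measured in the modified metric $|v|_y = |v|/y$, indeed comes out as $|\ln(b/a)|$.

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 10.0
# Path y(t) = a + t(b - a), t in [0, 1]; its speed in the modified metric is |b - a| / y(t).
length, _ = quad(lambda t: abs(b - a) / (a + t * (b - a)), 0.0, 1.0)
print(length, abs(np.log(b / a)))   # both approximately 1.609
```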
## References
• do Carmo, Manfredo P. (1992), Riemannian Geometry, Birkhäuser, ISBN 0-8176-3490-8 . See Chapter 3.
• Cheeger, Jeff; Ebin, David G. (1975), Comparison Theorems in Riemannian Geometry, Elsevier . See Chapter 1, Sections 2 and 3.
• Hazewinkel, Michiel, ed. (2001), "Exponential mapping", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• Helgason, Sigurdur (2001), Differential geometry, Lie groups, and symmetric spaces, Graduate Studies in Mathematics 34, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2848-9, MR 1834454 .
• Kobayashi, Shoshichi; Nomizu, Katsumi (1996), Foundations of Differential Geometry, Vol. 1 (New ed.), Wiley-Interscience, ISBN 0-471-15733-3 .
http://mathoverflow.net/questions/97288?sort=oldest
## Primes with more ones than zeroes in their Binary expansion
This question is also motivated by the development around my old MO question about Mobius randomness. It is also motivated by Joe O'Rourke's question on finding primes in sparse sets.
Let $A$ be the set of all natural numbers with more ones than zeroes in their binary expansion. Are there infinitely many primes in $A$?
More generally, for a function $f(n)$ defined on the natural numbers, let $A[f]$ denote the set of integers with $n$ digits and at least $n/2+f(n)$ ones, for $n=1,2,...$. Does $A[f]$ contain infinitely many primes?
Bourgain proved the Mobius randomness of $A$ and this seems closely related to this question. But I am not sure about the exact connection. (In fact Bourgain proved Mobius randomness for every $A$ described by a balanced monotone Boolean function of the binary digits.)
Showing infinitely many primes for sparse $A[f]$ would be interesting. Proving this for $f(n)=\alpha n$ where $\alpha>0$ is small would be terrific. Of course, if $f(n)=n/2$ we are talking about Mersenne primes, so I would not expect an answer here. (Showing infinitely many primes for $A$ with size smaller than $\sqrt n$ would cross some notable barrier.)
A similar question can be asked about balanced (and unbalanced) sets described by $AC^0$-formulas. This corresponds to Ben Green's $AC^0$ prime number theorem, but also here I am not sure what it will take to move from Mobius randomness to infinitude of primes.
Another related question: http://mathoverflow.net/questions/22629/are-there-primes-of-every-hamming-weight
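For concreteness, here is a small sketch (Python, using sympy only for prime generation; my addition, not part of the original question) that counts the primes below a modest bound lying in $A$. It only illustrates the set in question and of course says nothing about infinitude.

```python
from sympy import primerange

def in_A(n):
    """True if n has strictly more ones than zeros in its binary expansion."""
    bits = bin(n)[2:]
    return bits.count('1') > bits.count('0')

N = 10**5
primes = list(primerange(2, N))
print(sum(in_A(p) for p in primes), "of", len(primes), "primes below", N, "lie in A")
```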
This is probably not the answer you were looking for and certainly isn't meant as an answer, but for what it's worth, it seems this would follow from applying Goldbach's conjecture to numbers of the form $2^n$ and noting that if two odd numbers add up to $2^n$, then at least one of them has more ones than zeroes in its binary representation. Goldbach's conjecture is open, but just saying. – unknown (google) May 18 2012 at 10:47
hope you don't mind the comment! :) – unknown (google) May 18 2012 at 10:57
The result is also true assuming the Heath-Brown conjecture that the smallest prime in an arithmetic progression is $\le d^2$, where $d$ is the common difference of the progression. Unfortunately with this approach, the best unconditional result is $\alpha\approx 1/5$. – Gjergji Zaimi May 18 2012 at 14:44
@unknown: a very nice comment, but will this necessarily produce infinitely many primes? @Gil: who is Eric (I assume not the demi-bee)? – Igor Rivin May 18 2012 at 15:01
So now we have an answer conditional on Goldbach's conjecture, an answer conditional on the GRH, and an answer conditional on Legendre's conjecture (strengthening the result on Cramer). How strange... – Charles May 18 2012 at 17:30
## 4 Answers
I suppose this will follow from the plausible Cramer's conjecture about prime gap of $O(\log^2{p_n})$.
Let $n=(2^k-1)2^m$ with $m < k$ and $2^m>C\log^2{n}$ (actually $m > \log_2\!\left(C\log^2\left(2^m(2^k-1)\right)\right)$ will do).
$n$ has $k$ ones and (much) fewer zeros. The interval $(n,\,n+C\log^2{n})$, with $2^m>C\log^2{n}$, will contain a prime $p=n+\delta$. Since $0<\delta<2^m$, $\delta$ contributes at least one $1$ among the zero bits of $n$ while keeping all of the ones. There are infinitely many choices of $m,k$ producing distinct primes.
Legendre conjecture probably will do too.
Note that the doubly logarithmic choice of $m$ contributes relatively few zeros.
Added: So if you believe Cramer's conjecture, there are infinitely many $n$ for which some $n$-bit prime has only $O(\log{n})$ zeros in its binary expansion.
(btw, I would be very interested in unconditional answer to the question).
Legendre's conjecture works since it implies that there is a prime in [(16^n-1)^2, (16^n)^2]. Any such prime is an 8n-bit number starting with 4n-1 ones and ending with a 1. But not all the remaining digits can be 0, since that number is divisible by 3. Thus at least 4n+1 out of 8n bits are 1. – Charles May 18 2012 at 15:06
Thanks Charles. Can the RH gap bound of $O(\sqrt{x}\log{x})$ prove this question? – joro May 18 2012 at 15:21
If Linnik's constant is two, as conjectured (and something like that follows from GRH), then the least prime congruent to $2^n-1$ modulo $2^n$ has more ones than zeros in its binary expansion. Since Linnik's constant is down to $5.2$ unconditionally, you get something a bit weaker unconditionally too.
"On Linnik's constant" arxiv.org/abs/0906.2749 contains the $L=5.2$ result. – Joseph O'Rourke May 18 2012 at 17:38
The OP's conjecture follows from theorem 1.1 in "Primes with an average sum of digits" by Drmota, Mauduit and Rivat. They prove that the number of primes $\le x$ whose binary digit sum equals $k$ is given by $$\frac{\pi(x)}{\sqrt{(\pi/2) \log x }}\left(e^{-\frac{2(k-\frac{1}{2}\log x)^2}{\log x}}+O(\log x^{-\frac{1}{2}+\epsilon})\right).$$ So in particular this shows the stronger result in your question corresponding to the function $f(n)=\alpha \sqrt{n}$. (So one has infinitely many primes with $\frac{n}{2}+\alpha\sqrt{n}$ ones in their binary expansion.) The authors say that they weren't able to get any bounds for $f(n)=\alpha n$ with any $\alpha > 0$.
The thread mathoverflow.net/questions/22629/… contains an excellent answer of Ben Green's giving a very rough overview of the method used in the paper by M. Drmota, C. Mauduit and J. Rivat, as well as other interesting comments related to Gil's question. – Daniel m3 May 19 2012 at 18:25
We can take $f(n)=\alpha n$ for any $\alpha<0.7375$. In particular, the set of primes with more than twice as many ones as zeros in their binary expansion is infinite.
I posted a short article on the arXiv which deals with exactly this kind of problem. Let $s_2(n)$ denote the sum of digits base $2$. Since $x$ has approximately $\log_2(x)$ binary digits, we are looking at when $s_2(n)\geq \alpha \log_2 (n)$. In that 4 page note we prove that
$$\left|\left\{ p\leq x,\ p\ \text{prime}\ : s_2(p)\geq \alpha\log_2(x) \right\} \right|\gg_{\epsilon}\ x^{2\left(1-\alpha\right)}e^{-c\left(\log x\right)^{1/2+\epsilon}}.$$
Moreover, such a result extends naturally to base $q$, yielding the bound
$$\left|\left\{ p\leq x,\ p\ \text{prime}\ :\ s_{q}(p)\geq\alpha(q-1)\log_{q}(x)\right\} \right|\gg_{\epsilon}\ x^{2\left(1-\alpha\right)}e^{-c\left(\log x\right)^{1/2+\epsilon}}$$ where $s_q(n)$ is the sum of digits of $n$ in base $q$.
The proof takes advantage of the fact that the multinomial distribution is sharply peaked. The number $0.7375$ appears because $1-0.525/2=0.7375$, and $0.525$ is the exponent appearing in Baker Harman and Pintz's work on prime gaps.
Edit: At some point, I deleted my answer because I was unsatisfied with it. It has now been improved significantly.
http://www.citizendia.org/Geometric_progression
Diagram showing the geometric series 1 + 1/2 + 1/4 + 1/8 + . . . which converges to 2.
In mathematics, a geometric progression, also known as a geometric sequence, is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed non-zero number called the common ratio. For example, the sequence 2, 6, 18, 54, . . . is a geometric progression with common ratio 3, and 10, 5, 2.5, 1.25, . . . is a geometric sequence with common ratio 1/2. The sum of the terms of a geometric progression is known as a geometric series.
Thus, the general form of a geometric sequence is
$a,\ ar,\ ar^2,\ ar^3,\ ar^4,\ \ldots$
and that of a geometric series is
$a + ar + ar^2 + ar^3 + ar^4 + \ldots$
where r ≠ 0 is the common ratio and a is a scale factor, equal to the sequence's start value.
## Elementary properties
The n-th term of a geometric sequence with initial value a and common ratio r is given by
$a_n = a\,r^{n-1}$
Such a geometric sequence also follows the recursive relation
$a_n = r\,a_{n-1}$ for every integer $n\geq 2$
Generally, to check whether a given sequence is geometric, one simply checks whether successive entries in the sequence all have the same ratio.
The common ratio of a geometric series may be negative, resulting in an alternating sequence, with numbers switching from positive to negative and back. For instance
1, -3, 9, -27, 81, -243, . . .
is a geometric sequence with common ratio -3.
The behaviour of a geometric sequence depends on the value of the common ratio.
If the common ratio is:
• Positive, the terms will all be the same sign as the initial term.
• Negative, the terms will alternate between positive and negative.
• Greater than 1, there will be exponential growth towards positive infinity.
• 1, the progression is a constant sequence.
• Between -1 and 1 but not zero, there will be exponential decay towards zero.
• −1, the progression is an alternating sequence (see alternating series)
• Less than −1, there will be exponential growth towards infinity (positive and negative).
Geometric sequences (with common ratio not equal to -1, 1 or 0) show exponential growth or exponential decay, as opposed to the linear growth (or decline) of an arithmetic progression such as 4, 15, 26, 37, 48, . . . (with common difference 11). This result was taken by T. R. Malthus as the mathematical foundation of his Principle of Population. Note that the two kinds of progression are related: exponentiating each term of an arithmetic progression yields a geometric progression, while taking the logarithm of each term in a geometric progression with a positive common ratio yields an arithmetic progression.
## Geometric series
Main article: Geometric series
A geometric series is the sum of the numbers in a geometric progression:
$\sum_{k=0}^{n} ar^k = ar^0+ar^1+ar^2+ar^3+\cdots+ar^n \,$
We can find a simpler formula for this sum by multiplying both sides of the above equation by (1 − r), and we'll see that
$(1-r) \sum_{k=0}^{n} ar^k = a-ar^{n+1}\,$
since all the other terms cancel. Rearranging (for $r\ne1$) gives the convenient formula for a geometric series:
$\sum_{k=0}^{n} ar^k = \frac{a(1-r^{n+1})}{1-r}$
Note: If one were to begin the sum not from 0, but from a higher term, say m, then
$\sum_{k=m}^n ar^k=\frac{a(r^m-r^{n+1})}{1-r}$
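Both closed forms above are easy to verify exactly with rational arithmetic; a small Python sketch (the particular values of a, r, m and n are arbitrary):

```python
from fractions import Fraction

a, r = Fraction(3), Fraction(2, 5)
n, m = 12, 4
direct   = sum(a * r**k for k in range(0, n + 1))
closed   = a * (1 - r**(n + 1)) / (1 - r)
direct_m = sum(a * r**k for k in range(m, n + 1))
closed_m = a * (r**m - r**(n + 1)) / (1 - r)
print(direct == closed, direct_m == closed_m)   # True True (exact comparison)
```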
Differentiating this formula with respect to r allows us to arrive at formulae for sums of the form
$\sum_{k=0}^n k^s r^k$
For example:
$\frac{d}{dr}\sum_{k=0}^nr^k = \sum_{k=0}^nkr^{k-1}=\frac{1-r^{n+1}}{(1-r)^2}-\frac{(n+1)r^n}{1-r}$
For a geometric series containing only even powers of r, multiply by (1 − r^2):
$(1-r^2) \sum_{k=0}^{n} ar^{2k} = a-ar^{2n+2}$
Then
$\sum_{k=0}^{n} ar^{2k} = \frac{a(1-r^{2n+2})}{1-r^2}$
For a series with only odd powers of r
$(1-r^2) \sum_{k=0}^{n} ar^{2k+1} = ar-ar^{2n+3}$
and
$\sum_{k=0}^{n} ar^{2k+1} = \frac{ar(1-r^{2n+2})}{1-r^2}$
### Infinite geometric series
Main article: Geometric series
An infinite geometric series is an infinite series whose successive terms have a common ratio. Such a series converges if and only if the absolute value of the common ratio is less than one ( | r | < 1 ). Its value can then be computed from the finite sum formulae
$\sum_{k=0}^\infty ar^k = \lim_{n\to\infty}{\sum_{k=0}^{n} ar^k} = \lim_{n\to\infty}\frac{a(1-r^{n+1})}{1-r}= \lim_{n\to\infty}\frac{a}{1-r} - \lim_{n\to\infty}{\frac{ar^{n+1}}{1-r}}$
Since $r^{n+1} \to 0$ as $n \to \infty$ when $|r| < 1$, this becomes:
$\sum_{k=0}^\infty ar^k = \frac{a}{1-r} - 0 = \frac{a}{1-r}$
For example, using numerical values
$\sum_{k=0}^\infty (191) \left(\frac{6}{7}\right)^k = \frac{191}{1-\frac{6}{7}} = 1337$
For a series containing only even powers of r,
$\sum_{k=0}^\infty ar^{2k} = \frac{a}{1-r^2}$
and for odd powers only,
$\sum_{k=0}^\infty ar^{2k+1} = \frac{ar}{1-r^2}$
In cases where the sum does not start at k = 0,
$\sum_{k=m}^\infty ar^k=\frac{ar^m}{1-r}$
The above formulae are valid only for | r | < 1. The latter formula is actually valid in every Banach algebra, as long as the norm of r is less than one, and also in the field of p-adic numbers if $|r|_p < 1$. As in the case of a finite sum, we can differentiate to calculate formulae for related sums. For example,
$\frac{d}{dr}\sum_{k=0}^\infty r^k = \sum_{k=0}^\infty kr^{k-1}=\frac{1}{(1-r)^2}$
This formula only works for | r | < 1 as well. From this, it follows that, for | r | < 1,
$\sum_{k=0}^{\infty} k r^k = \frac{r}{\left(1-r\right)^2} \,;\, \sum_{k=0}^{\infty} k^2 r^k = \frac{r \left( 1+r \right)}{\left(1-r\right)^3} \, ; \, \sum_{k=0}^{\infty} k^3 r^k = \frac{r \left( 1+4 r + r^2\right)}{\left( 1-r\right)^4}$
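These three closed forms are easy to check numerically; a minimal sketch in which r = 0.4 and the 2000-term cutoff are arbitrary illustration choices:

```
# Check the closed forms for sums of k^s * r^k with |r| < 1.
r, terms = 0.4, 2000
for s, closed in [
    (1, r / (1 - r) ** 2),
    (2, r * (1 + r) / (1 - r) ** 3),
    (3, r * (1 + 4 * r + r**2) / (1 - r) ** 4),
]:
    partial = sum(k**s * r**k for k in range(terms))
    assert abs(partial - closed) < 1e-9
```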
Also, the infinite series 1/2 + 1/4 + 1/8 + 1/16 + · · · is an elementary example of a series that converges absolutely.
It is a geometric series whose first term is 1/2 and whose common ratio is 1/2, so its sum is
$\frac12+\frac14+\frac18+\frac{1}{16}+\cdots=\frac{1/2}{1-(+1/2)} = 1.$
The alternating counterpart of the above series, 1/2 − 1/4 + 1/8 − 1/16 + · · ·, is a simple example of an alternating series that converges absolutely.
It is a geometric series whose first term is 1/2 and whose common ratio is −1/2, so its sum is
$\frac12-\frac14+\frac18-\frac{1}{16}+\cdots=\frac{1/2}{1-(-1/2)} = \frac13.$
### Complex numbers
The summation formula for geometric series remains valid even when the common ratio is a complex number. This fact can be used to calculate some sums of non-obvious geometric series, such as:
$\sum_{k=0}^{\infty} \frac{\sin(kx)}{r^k} = \frac{r \sin(x)}{1 + r^2 - 2 r \cos(x)}$
The proof of this formula starts with
$\sin(kx) = \frac{e^{ikx} - e^{-ikx}}{2i}$
a consequence of Euler's formula. Substituting this into the series above, we get
$\sum_{k=0}^{\infty} \frac{\sin(kx)}{r^k} = \frac{1}{2 i} \left[ \sum_{k=0}^{\infty} \left( \frac{e^{ix}}{r} \right)^k - \sum_{k=0}^{\infty} \left(\frac{e^{-ix}}{r}\right)^k\right]$.
This is just the difference of two geometric series. From here, it is then a straightforward application of our formula for infinite geometric series to finish the proof.
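The closed form can also be checked numerically; a minimal sketch, noting that the series needs |r| > 1 to converge (x = 0.9, r = 1.5 and the 400-term cutoff are arbitrary test values):

```
import math

# Compare the partial sum of sin(k*x)/r^k with the closed form above.
x, r = 0.9, 1.5
partial = sum(math.sin(k * x) / r**k for k in range(400))
closed = r * math.sin(x) / (1 + r**2 - 2 * r * math.cos(x))
assert abs(partial - closed) < 1e-9
```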
## Product
The product of a geometric progression is the product of all terms. If all terms are positive, then it can be quickly computed by taking the geometric mean of the progression's first and last term, and raising that mean to the power given by the number of terms. (This is very similar to the formula for the sum of the terms of an arithmetic sequence: take the arithmetic mean of the first and last term and multiply by the number of terms.)
$\prod_{i=0}^{n} ar^i = \left( \sqrt{a_1 \cdot a_{n+1}}\right)^{n+1}$ (if a,r > 0).
Proof:
Let the product be represented by P:
$P=a \cdot ar \cdot ar^2 \cdots ar^{n-1} \cdot ar^{n}$.
Now, carrying out the multiplications, we conclude that
$P=a^{n+1} r^{1+2+3+ \cdots +(n-1)+n}$.
Applying the sum of arithmetic series, the expression will yield
$P=a^{n+1} r^{\frac{n(n+1)}{2}}$.
$P=(ar^{\frac{n}{2}})^{n+1}$.
We raise both sides to the second power:
$P^2=(a^2 r^{n})^{n+1}=(a\cdot ar^n)^{n+1}$.
Consequently
$P^2=(a_1 \cdot a_{n+1})^{n+1}$ and
$P=(a_1 \cdot a_{n+1})^{\frac{n+1}{2}}$,
which concludes the proof.
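A quick numerical check of the product formula, with arbitrary positive test values of $a$, $r$ and $n$:

```
# Verify P = (a_1 * a_{n+1})^((n+1)/2) for a geometric progression with a, r > 0.
a, r, n = 2.0, 1.3, 7

direct = 1.0
for i in range(n + 1):
    direct *= a * r**i

first, last = a, a * r**n                      # a_1 and a_{n+1}
closed = (first * last) ** ((n + 1) / 2)
assert abs(direct - closed) / closed < 1e-12
```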
## Relationship to geometry and Euclid's work
Books VIII and IX of Euclid's Elements analyze geometric progressions and give several of their properties.
A geometric progression gains its geometric character from the fact that the areas of two geometrically similar plane figures are in "duplicate" ratio to their corresponding sides; further, the volumes of two similar solid figures are in "triplicate" ratio of their corresponding sides.
The meaning of the words "duplicate" and "triplicate" in the previous paragraph is illustrated by the following examples. Given two squares whose sides have the ratio 2 to 3, their areas will have the ratio 4 to 9; we can write this as 4 to 6 to 9 and notice that the ratios 4 to 6 and 6 to 9 both equal 2 to 3; so by using the side ratio 2 to 3 "in duplicate" we obtain the ratio 4 to 9 of the areas, and the sequence 4, 6, 9 is a geometric sequence with common ratio 3/2. Similarly, given two cubes whose side ratio is 2 to 5, their volume ratio is 8 to 125, which can be obtained as 8 to 20 to 50 to 125, the original ratio 2 to 5 "in triplicate", yielding a geometric sequence with common ratio 5/2.
### Elements, Book IX
The geometric progression 1, 2, 4, 8, 16, 32, . . . (or, in the binary numeral system, 1, 10, 100, 1000, 10000, 100000, . . .) is important in number theory. Book IX, Proposition 36 of Elements proves that if the sum of the first n terms of this progression is a prime number, then this sum times the nth term is a perfect number. For example, the sum of the first 5 terms of the series (1 + 2 + 4 + 8 + 16) is 31, which is a prime number. The sum 31 multiplied by 16 (the 5th term in the series) equals 496, which is a perfect number.
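Proposition IX.36 is easy to check by brute force for this example; a minimal Python sketch:

```
# If 1 + 2 + ... + 2^(n-1) = 2^n - 1 is prime, then (2^n - 1) * 2^(n-1) is perfect.
def is_perfect(m):
    return m == sum(d for d in range(1, m) if m % d == 0)

n = 5
mersenne = 2**n - 1                       # 31, which is prime
print(is_perfect(mersenne * 2**(n - 1)))  # 496 -> True
```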
Book IX, Proposition 35 proves that in a geometric series if the first term is subtracted from the second and last term in the sequence, then as the excess of the second is to the first, so will the excess of the last be to all of those before it. (This is a restatement of our formula for geometric series from above.) Applying this to the geometric progression 31,62,124,248,496 (which results from 1,2,4,8,16 by multiplying all terms by 31), we see that 62 minus 31 is to 31 as 496 minus 31 is to the sum of 31,62,124,248. Therefore the numbers 1,2,4,8,16,31,62,124,248 add up to 496 and further these are all the numbers which divide 496. For suppose that P divides 496 and it is not amongst these numbers. Assume P×Q equals 16×31, or 31 is to Q as P is to 16. Now P cannot divide 16 or it would be amongst the numbers 1,2,4,8,16. Therefore 31 cannot divide Q. And since 31 does not divide Q and Q measures 496, the fundamental theorem of arithmetic implies that Q must divide 16 and be amongst the numbers 1,2,4,8,16. Let Q be 4, then P must be 124, which is impossible since by hypothesis P is not amongst the numbers 1,2,4,8,16,31,62,124,248.
## geometric progression
### -noun
1. (analysis) A sequence in which each term except the first is obtained from the previous one by multiplying it by a constant value, known as the common ratio of the geometric progression.
http://math.stackexchange.com/questions/139217/area-under-a-curve-difference-between-dy-and-dx/139225
# Area under a curve, difference between dy and dx
I am trying to find the volume obtained when the region between $y = 1$ and $y = x^\frac{1}{4}$, from $x = 0$ to $x = 1$, is revolved around $x = 1$.
In class we did the problem with respect to $y$, so from my understanding that means taking the "rectangles" along $f(y)$, i.e. from the $y$ axis. I was wondering why not just do it with respect to $x$; it would be either a vertical or a horizontal slice of the region, but the result should be the same. I was not able to get the correct answer for the problem that way, but I am not sure why.
Also, one other question I had about this: is there a hole at $(1,1)$ in the shape? The area being subtracted is defined there, so shouldn't that be a hole since we are taking that area away? Both functions equal 1 at $x = 1$.
-
It is easier to compute the volume with disks/washers than with cylindrical shells. Since you are revolving around $x=1$, if you try to set it up with respect to $x$ you will be using cylindrical shells; if you set it up with respect to $y$, you are using disks/washers. You can use either procedure to get the correct answer if you do them right, but one usually tries to use the method which is likely to be easier, not the one that is likely to be harder, if given the choice. – Arturo Magidin May 1 '12 at 3:51
You are trying to find a volume generated by the strip as it rotates. Your limit will be over the thickness of the strip going to $0$. If the side which corresponds to this thickness is perpendicular to the axis of rotation, then the area is simply given by $\pi \ell^2\Delta$, where $\ell$ is the length/height, whichever is appropriate, and $\Delta$ the thickness; $\ell$ is often just the value of the function. (cont) – Arturo Magidin May 1 '12 at 4:00
(cont) If the thickness is parallel to the axis of rotation, then you get a cylindrical shell, and the volume is the more complicated $2\pi r\ell\Delta$, where $r$ is distance to axis, $\ell$ is length/height of the slice (which ever is appropriate), and $\Delta$ the thickness. This usually involves computing two different quantities, $r$ and $\ell$, instead of a single one. Note you don't get a disk by rotating a vertical strip vertically: you get a hollow cylinder. Volumes of disks are easy; volumes of hollow cylinders are hard and not degree 1 on the thickness. – Arturo Magidin May 1 '12 at 4:03
Like I said: you can use either one to compute volume, but when the strips are perpendicular to the axis of rotation, it usually leads to a simpler formula for your integration. If the strips are parallel to the axis of rotations, then when you try to compute the volume with an integral it usually leads to a harder formula. You cannot use the formula for disks with strips that are parallel to the axis of rotation, because those strips don't describe a disk when rotated, they describe a hollow cylinder. – Arturo Magidin May 1 '12 at 4:10
Yes; imagine a pole in a merry-go-round. The shape it describes is neither a disk (a thick circle) nor a washer (a thick circle with a hole in the middle). It is like a hollow cylinder. So you don't compute the volume of that shape with the formula $\pi r^2\Delta$, because that's the volume of a disk, not the shape being described by the pole going around the merry-go-round. – Arturo Magidin May 1 '12 at 4:14
## 1 Answer
I expect you have drawn a picture, and that it is the region below $y=1$, above $y=x^{1/4}$, from $x=0$ to $x=1$ that is being rotated about $x=1$. When you rotate, you get a cylinder with a kind of an upside down bowl carved out of it, very thin in the middle. You have asked similar questions before, so I will be brief.
It is probably easiest to do it by slicing parallel to the $x$-axis. So take a slice of thickness $dy$, at height $y$. We need to find the area of cross-section.
Look at the cross-section. It is a circle with a circle removed. The outer radius is $1$, and the inner radius is $1-x$. So the area of cross-section is $\pi(1^2-(1-x)^2)$. We need to express this in terms of $y$. Note that $x=y^4$, so our volume is $$\int_0^1 \pi\left(1^2-(1-y^4)^2\right)\,dy.$$ I would find it more natural to find the volume of the hollow part, and subtract from the volume of the cylinder.
You could also use shells. Take a thin vertical slice, with base going from $x$ to $x+dx$, and rotate it. At $x$, we are going from $x^{1/4}$ to $1$. The radius of the shell is $1-x$, and therefore the volume is given by $$\int_0^1 2\pi(1-x)(1-x^{1/4})\,dx.$$ Multiply through, and integrate term by term. Not too bad, but slicing was easier.
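A small numerical cross-check of the two setups above (washers in $y$ versus shells in $x$), using a simple midpoint rule; the 100000-subinterval count is an arbitrary choice:

```
import math

# Midpoint-rule estimates of the washer and shell integrals; they should agree.
N = 100_000
h = 1.0 / N

washers = sum(math.pi * (1 - (1 - ((i + 0.5) * h) ** 4) ** 2) * h for i in range(N))
shells = sum(2 * math.pi * (1 - (i + 0.5) * h) * (1 - ((i + 0.5) * h) ** 0.25) * h
             for i in range(N))

print(washers, shells)   # both are ~0.9076, i.e. 13*pi/45
```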
-
http://mathoverflow.net/questions/41675/how-generalized-eigenvalues-combine-into-producing-the-spectral-measure
## How “generalized eigenvalues” combine into producing the spectral measure?
Hi... I am wondering how 'eigenvalues' that don't lie in my Hilbert space combine into producing the spectral measure. I study probability and I am quite ignorant in the field of spectral analysis of operators on Hilbert spaces, so please go easy on me :). I tried reading parts of the classical Reed and Simon "Functional Analysis", volume 1, and other books, but I feel I am very far from an answer. At least now I can formulate my question.
The mathematical setting is the following: I consider a general (possibly unbounded) operator $A$ on a Hilbert space H with scalar product $(. , .)$, say $H = L^2( \mathbb{R} )$. $D(A)$ will be a dense domain. I do understand that the spectrum $\sigma(A)$ is defined as the complement of the resolvent set, and can be broken into continuous, residual and point spectra. In the case of self-adjoint operators (hence closed), a very abstract version of the von Neumann spectral theorem asserts that $A$ can be diagonalized using a spectral decomposition of the identity. The full setting would look like (cf. "Quantum physics for mathematicians" by Leon Takhtajan):
• There is a spectrum-indexed family of projectors $P_\lambda(.)$, $\lambda \in \sigma(A)$. These are the "spectral projectors" and reduce in finite dimension to $P_\lambda(x) = \sum_{ \mu \leq \lambda }(x, e_\mu) e_\mu$, $e_\mu$ being the orthonormal diagonalizing basis. And when $\lambda$ is in the point spectrum: $dP_\lambda(x) = (x, e_\lambda) e_\lambda$
• The image of spectral projectors grows with respect to the spectral parameters so that the following identity is true: $$\forall f \in H, P_\lambda \circ P_\mu(f) = P_{min(\lambda,\mu)}(f)$$
• The behavior regarding the $\lambda$ parameter is such that: $$\forall (f,g) \in H^2, (P_\lambda(f), g) = \mu_{f,g}(]-\infty; \lambda])$$ with $\mu_{f,g}$ a measure. Because of this property, $P_.(f)$ can be seen as a measure on H itself whatever that means.
• The spectral decomposition of the identity (trivial in finite dimension): $$\forall f \in H, f = \int_{-\infty}^\infty dP_\lambda(f)$$
• The spectral decomposition of our operator: $$\forall f \in H, Af = \int_{-\infty}^\infty \lambda d P_\lambda(f)$$ This last one being the generalization of the very basic linear algebra identity valid for hermitian matrices: $$\forall x \in \mathbb{R}^n, A(x) = \sum_{ \lambda \in \sigma(A) } \lambda (x, e_\lambda) e_\lambda$$ It is well known that at $\lambda$ an eigenvalue (in the point spectrum), the spectral measure has a Dirac, as we find ourselves in same situation as the finite dimensional case. I am interested in "generalized eigenfunctions" that are functions not necessarily in $L^2$, but that verify still $Af = \lambda f$ for a certain $\lambda$ in the general spectrum. I am now including two classical examples.
In the case of the "position" operator: $$D(A) = ({ f \in H, \int x^2 f(x)^2 dx < \infty })$$ $$(Af)(x) = x f(x)$$ The spectrum is well known and only continuous: $\sigma(A) = \mathbb{R}$. It is obvious that the operator is already diagonal, and that matter is reflected by the fact that the "generalized eigenfunctions" are Dirac distributions: $$(A \delta_\lambda)(x) = x \delta_\lambda(x) = \lambda \delta_\lambda(x)$$ $$\forall f \in H, Af = \int_{\mathbb{R}} \lambda f(\lambda) \delta_\lambda(.)$$
In the case of $A = -\Delta$: $$D(A) = ({ f \in H, \int f''(x)^2 dx < \infty })$$ The spectrum is well known and only continuous: $\sigma(A) = \mathbb{R}^+$. The operator is diagonalized using the Fourier transform $\mathcal{F}$, and that matter is reflected by the fact that the "generalized eigenfunctions" are complex unitary characters $e^{i\sqrt{\lambda}x}$ and $e^{-i\sqrt{\lambda}x}$. The spectral theorem takes a simple shape thanks to the Fourier transform. Indeed: $$\forall f \in H, \mathcal{F}(Af)(k) = k^2 \mathcal{F}(f)(k)$$ Then: $$\forall f \in H, (Af)(x) = \frac{1}{2 \pi} \int_{\mathbb{R}} k^2 e^{-i k x} \mathcal{F}(f)(k) dk = \frac{1}{2 \pi} \int_{\mathbb{R}^+} \lambda ( e^{-i \sqrt{\lambda} x}\mathcal{F}(f)(\sqrt{\lambda}) - e^{i \sqrt{\lambda} x}\mathcal{F}(f)(-\sqrt{\lambda}) ) d\lambda$$ This last way of writing the operator 'diagonalization' shows that the spectral measure is a superposition of the two types of 'waves' (positively propagating $e^{i\sqrt{\lambda}x}$ and negatively propagating $e^{-i\sqrt{\lambda}x}$) with a weight given by the Fourier transform of f, whatever that really means.
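A finite-dimensional toy picture of this diagonalization, as a hedged sketch: on a periodic grid of $N$ points the discrete Laplacian is diagonalized by discrete plane waves, the analogues of the generalized eigenfunctions $e^{\pm i\sqrt{\lambda}x}$. This is only an analogy, not the unbounded operator itself, and the grid size and mode number below are arbitrary.

```
import numpy as np

# Discrete (negative) Laplacian with periodic boundary conditions on N points.
N = 64
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)

k = 5                                             # arbitrary mode number
wave = np.exp(2j * np.pi * k * np.arange(N) / N)  # discrete "plane wave"
eigval = 2 - 2 * np.cos(2 * np.pi * k / N)        # its eigenvalue
print(np.allclose(L @ wave, eigval * wave))       # True
```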
In those two cases, we see that those "generalized eigenfunctions" (Diracs and unitary complex characters) combine in a special way in order to produce the spectral measure. I read somewhere a sentence that left me puzzled: `those eigenfunctions combine into a Schwartz kernel`. I think I read that in "Quantum physics for mathematicians" by L. Takhtajan. Now I get the feeling that fully diagonalizing a self-adjoint operator can be a very hard task. Can you also provide me with other references than those I have used so far?
In the end, my question could be formulated as follows, even if I am not satisfied with it, as it is still too vague: suppose that by some means I know all or some of the "generalized eigenfunctions". Can I then express the spectral measure in terms of those eigenfunctions? If so, how?
Side questions:
• It seems natural to ask my generalized eigenfunctions to be a complete orthonormal system, whose cardinality is at least the cardinality of the spectrum (if there are no multiplicities). It then makes me feel that those functions (or distributions) will lie in a non-separable space, as all orthonormal systems in separable spaces are countable. For the same reason, an operator's point spectrum is always countable. This makes me think that generalized eigenfunctions have to be looked for in a very big space with a special topology.
• Why do some "generalized eigenfunctions" count and others don't? I am thinking of the Laplacian case, as any $f(x) = e^{zx}, z \in \mathbb{C}$ satisfies $f'' = z^2 f$. But clearly, only those with imaginary $z$ count. There are also the harmonic polynomials. Is this related to the fact of being unitary?
I would very much appreciate any references or hints. And I am sorry if my question is not stated in the proper terms of spectral analysis/operator theory.
Phew that was long to type...
Cheers
Reda
-
-1 for two points. First of all, $(P_\lambda(f),g)=\mu(]-\infty;\lambda]$ is certainly false. How could it be true for every $f$ and $g$ in $H$ ? Second, you employ $\mu$ once for a measure, then in the next line for a number (in $\min(\lambda,\mu)$). – Denis Serre Oct 10 2010 at 13:50
I don't think it is such a big deal. It's just notation, and the problem is simply solved by renaming the measure $\mu$ as $\mu_{f,g}$. – Martin Argerami Oct 10 2010 at 14:24
True... i'll edit my post... that part i believe is not very clear... Sorry i am no expert and just learned these things... – Reda Oct 10 2010 at 14:41
## 1 Answer
Wouldn't Theorem 4.2 in here answer your question?
-
Wow, thanks... This definitely answers quite a lot... I indeed suspected there was the need for "completing" the space... So i'll read more about those Hilbert-Schmidt riggings... I may be able to better formulate or even answer my side questions... Thanks again – Reda Oct 10 2010 at 15:28
Gee i should be ashamed... That was the first google hit for "generalized eigenfunctions"... I had tried a bunch of things on google, but not that... – Reda Oct 10 2010 at 15:39
Actually, that's how I found it. I had never thought of the possibility of finding "eigenfunctions" outside the natural Hilbert space, so I just googled "generalized eigenfunction". – Martin Argerami Oct 10 2010 at 16:28
http://physics.stackexchange.com/questions/tagged/curvature?sort=votes&pagesize=30
# Tagged Questions
The curvature tag has no wiki summary.
5answers
2k views
### Laplace operator's interpretation
What is your interpretation of Laplace operator? When evaluating Laplacian of some scalar field at a given point one can get a value. What does this value tell us about the field or it's behaviour in ...
1answer
153 views
### Is spacetime flat inside a spherical shell?
In a perfectly symmetrical spherical hollow shell, there is a null net gravitational force according to Newton, since in his theory the force is exactly inversely proportional to the square of the ...
3answers
194 views
### Why Can We Observe Space Curvature / Warping At All?
I don't understand why we are able to see and measure curvature / warping of space at all. Space as I understand it determines distances between objects, so if space were "compressed" or warped, ...
2answers
354 views
### How can a point-like particle “feel” gravity, if locally the curvature of spacetime is always flat?
I imagine a point-like particle can only experience the local properties of spacetime. But locally there is no curvature and no gravity, as it is often stated that Locally, as expressed in the ...
1answer
520 views
### What is the stress energy tensor?
I'm trying to understand the Einstein Field equation equipped only with training in Riemannian geometry. My question is very simple although I cant extract the answer from the wikipedia page: Is the ...
3answers
694 views
### Does the curvature of spacetime theory assume gravity?
Whenever I read about the curvature of spacetime as an explanation for gravity, I see pictures of a sheet (spacetime) with various masses indenting the sheet to form "gravity wells." Objects which are ...
1answer
165 views
### Curvature of the Universe imaginary?
If the curvature of the universe is zero, then $$Ω = 1$$ and the Pythagorean Theorem is correct. If instead $$Ω> 1$$ there will be a positive curvature, and if $$Ω <1$$ there will be a negative ...
1answer
114 views
### Source term of the Einstein field equation
My copy of Feynman's "Six Not-So-Easy Pieces" has an interesting introduction by Roger Penrose. In that introduction (copyright 1997 according to the copyright page), Penrose complains that Feynman's ...
2answers
200 views
### Is the curvature of spacetime invariant? Could it be characterized as the ether?
I'm writing a paper for a Philosophy of Science course about GR/SR and I'm wondering if I can (1) characterize the curvature of spacetime as invariant and (2) argue that this is what Einstein referred ...
1answer
156 views
### Curvature and edge state
If the boundary of quantum hall fluid has non-constant curvature, how will it affect the edge state which is usually described in chiral Luttinger fluid?
3answers
138 views
### How do you tell if a metric is curved?
I was reading up on the Kerr metric (from Sean Carroll's book) and something that he said confused me. To start with, the Kerr metric is pretty messy, but importantly, it contains two constants - ...
2answers
207 views
### Space-time geometry and metric
I am confused in one question in general relativity, why we can always express a space-time geometry only by metric. It means a metric, which is just about distance in tangent space, can tell us all ...
3answers
113 views
### How scalar curvature of following spacetime can be equal to zero?
For an interval of this spacetime, $$ds^{2} = c^{2}dt^{2} - c^{2}t^{2}(d \psi^{2} + sh^{2}(\psi )(d \theta^{2} + sin^{2}(\theta )d \varphi^{2})),$$ scalar curvature is equal to zero. Also, Ricci ...
2answers
79 views
### How is the shape of the universe measured by scientists?
I would like to learn how scientists go about measuring the large-scale curvature of the universe to determine if the universe is closed 'i.e. spherical', flat, or open 'i.e. saddle shaped'. My ...
2answers
183 views
### What is the variation of Gauss-Bonnet term a total derivative of?
What is the variation of Gauss-Bonnet term total derivative of? i.e. Variation of Gauss-Bonnet combination $= \nabla_{\mu} C^{\mu}$. What's $C^{\mu}$ in 4-dimensions?
1answer
141 views
### Does the curvature of space-time cause objects to look smaller than they really are?
What's the difference between looking at a star from a black hole and looking at it from empty space? My guess is that the curvature of space-time distorts the wavelength of light thus changing the ...
0answers
57 views
### gravitational convergence of light
light has a non-zero energy-stress tensor, so a flux of radiation will slightly affect curvature of spacetime Question: assume a flux of radiation in the $z$ direction, in flat Minkowski space it ...
0answers
94 views
### Why does the overhand knot jam but the figure-8 knot doesn't?
After tensioning a rope with an overhand knot in it, it is often very hard if not impossible to untie it; a figure-8 knot, on the other hand, still releases easily. Why is that so? Most "knot and ...
2answers
370 views
### Where do I start with Non-Euclidean Geometry?
I've been trying to grok General Relativity for a while now, and I've been having some trouble. Many physics textbooks gloss over the subject with an "it's too advanced for this medium", and many ...
2answers
222 views
### asymptotic curvature of the universe and correlation with local curvature
There is not-so-rough evidence that at very large scale the universe is flat. However we see everywhere that there are local lumps of matter with positive curvature. So i have several questions ...
4answers
316 views
### Gravitation is not force?
Einstein said that gravity can be looked at as curvature in space- time and not as a force that is acting between bodies. (Actually what Einstein said was that gravity was curvature in space-time and ...
1answer
149 views
### Curvature, Omega, the Flatness problem, and the evolving shape of the universe
I'm a little confused by this: http://en.wikipedia.org/wiki/Flatness_problem Which seems to imply the universe is more curved now than it was soon after the Big Bang. Look at the graph on the right ...
2answers
92 views
### How/why can the cosmic background radiation measurements tell us anything about the curvature of the universe?
So I've read the Wikipedia articles on WMAP and CMB in an attempt to try to understand how scientists are able to deduce the curvature of the universe from the measurements of the CMB. The Wiki ...
1answer
117 views
### Material strain from spacetime curvature
Let's say that you moved an object made of rigid materials into a place with extreme tidal forces. Materials have a modulus of elasticity and a yield strength. Does the corresponding 3D geometric ...
2answers
82 views
### How can I vizualize and understand curved spaces in general relativity?
I'm taking a basic physics class and the teacher described space with a special table that has curves and black holes etc. He would throw a metal ball down onto it and the class would watch it circle ...
0answers
67 views
### Curvature and spacetime
Suppose that it is given that the Riemann curvature tensor in a special kind of spacetime of dimension $d\geq2$ can be written as $$R_{abcd}=k(x^a)(g_{ac}g_{bd}-g_{ad}g_{bc})$$ where $x^a$ is a ...
1answer
173 views
### Difference between $\partial$ and $\nabla$ in general relativity
I read a lot in Road to Reality, so I think I might use some general relativity terms where I should only special ones. In our lectures we just had $\partial_\mu$ which would have the plain partial ...
2answers
114 views
### What is the curvature of the universe?
What is currently the most plausible model of the universe regarding curvature, positive, negative or flat? (I'm sorry if the answer is already out there, but I just can't seem to find it...)
2answers
122 views
### Curved space or curved spacetime?
As I understand it, you can have time + flat space = curved spacetime. So, when one is trying to emphasise that there is a curvature to the space, is it more technically correct to say curved space ...
1answer
78 views
### Flat poster on a wall gaining curvature over time
Assuming you have a flat poster with no curvature, why is it that when you pin it to the wall (with thumbtacks) it gains curvature as seen in the picture below. When I put the poster up it was ...
2answers
254 views
### Equation of the saddle-like surface with constant negative curvature?
What is the equation for the saddle-like 2d surface (embeded in 3d Euclidean space with cartesian coordinates x, y and z) with constant negative curvature frequently used to illustrate open universe ...
1answer
202 views
### What bends fabric of space-time?
I know that mass can bend fabric of space-time, which causes gravity by making an object curve around a planet or star but is there anything else that can bend it? Other energy sources, forces ...
1answer
327 views
### $\pi$ and the Curvature of Space
If one draws a circle on a sphere and measures the ratio of the diameter to the circumference, that value varies depending on the diameter of the circle compared to the diameter of the sphere it is ...
1answer
148 views
### What's the difference between the equivalence principle and curvature of spacetime?
Calculating using the equivalence principle only accounts for half the deflection of light, whereas the other half is from curvature of space-time. But isn't the equivalence principle the same thing ...
2answers
140 views
### Galilean transformations and Frenet Frame
How I can prove that the curvature and torsion of a curve are invariant under the Galilean transformations? In my physics book a hint is the isometries of Galilean transformations, but it's still ...
1answer
145 views
### Curved lines in a picture (Photography)
My problem is when I take a picture (a close one) the straight edge looks a little curved. In a standard camera, like a CyberShot. I would like to know if there is some relationship between the ...
0answers
275 views
### de Sitter and anti de Sitter metric
Is the following correct for the distance $d$ from the origin $(0,0)$ to point $(t,x)$ in the 2-dimensional de-Sitter and anti de-Sitter spaces? Here, $t$ is time and the distance may be called the ...
1answer
280 views
### Superposition of Ricci scalars [closed]
Suppose I have two point/line singularities in spacetime (what is important to me is that they are localized). Also suppose I have some fields in spacetime and that the two singularities interact with ...
0answers
166 views
### A question about surface tension of membranes and their curvature
I'm reading a review about membranes properties and I have reach a section about fluid membranes. The section discuss the principal curvatures ($c_1, c_2$) and the spontaneous curvatures ($c_0$). ...
3answers
154 views
### Why geometrically four acceleration is a curvature vector of a world line? And what is proper acceleration?
Why geometrically four acceleration is a curvature vector of a world line? Geometrically, four-acceleration is a curvature vector of a world line. Therefore, the magnitude of the ...
1answer
66 views
### Is there a formula to work out how much the fabric of spacetime bends?
From my knowledge, a big mass (planet star etc) can bend the fabric of spacetime. Is there a formula that we can use to work out how much it bends?
1answer
189 views
### In what way is the Riemann curvature tensor related to 'radius of curvature'?
In Misner, Thorne & Wheeler, they say, in their delightful 'word equations' that \left(\frac{\mathrm{radius\,\, of \,\,curvature}}{\mathrm{of\,\, spacetime}}\right) = ...
2answers
155 views
### What is the Riemann curvature tensor contracted with the metric tensor?
Can the Ricci curvature tensor be obtained by a 'double contraction' of the Riemann curvature tensor? For example $R_{\mu\nu}=g^{\sigma\rho}R_{\sigma\mu\rho\nu}$.
1answer
103 views
### What is the curvature scalar $\Psi_{4}$?
What is the curvature scalar $\Psi_{4}$? Is it related to the scalar curvature $R$? What does its real and imaginary parts represent?
1answer
77 views
### Ricci scalars for space and spacetime, local and global curvature
If Ricci scalar describes the full spacetime curvature, then what do we mean by $k=0,+1,-1$ being flat, positive and negative curved space? Is $k$ special version of a constant "3d-Ricci" scalar? ...
2answers
190 views
### How to concile flat spacetime and big bang?
After reading How do we resolve a flat spacetime and the cosmological principle? I still remain perplex. Please excuse my ignorance and try explaining to me : I thought that basically, when we ...
1answer
431 views
### Is the curvature of space around mass independent of gravity?
Is the curvature of space caused by the local density of the energy in that area?Could gravity be a separate phenomenon only arising from the curvature of space? For instance if the density of energy ...
2answers
89 views
### Ricci tensor for a 3-sphere without Math packets
Let's have the metric for a 3-sphere: $$dl^{2} = R^{2}\left(d\psi ^{2} + sin^{2}(\psi )(d \theta ^{2} + sin^{2}(\theta ) d \varphi^{2})\right).$$ I tried to calculate Riemann or Ricci tensor's ...
0answers
97 views
### How to calculate Riemann and Ricci tensors for a sphere? [closed]
Let's have the metric for a sphere: $$dl^{2} = R^{2}\left(d\psi ^{2} + sin^{2}(\psi )(d \theta ^{2} + sin^{2}(\theta ) d \varphi^{2})\right).$$ I tried to calculate Riemann or Ricci tensor's ...
0answers
25 views
### How to prove the derive the expression for space part of Riemann tensor for homogeneous and isotropic space-time?
It's not a homework!! For spheric, hyperbolic and flat case $$dl^{2} = R^{2}\left(d \psi^{2} + sin^{2}(\psi )(d \theta^{2} + sin^{2}(\theta )d \varphi^{2})\right),$$ dl^{2} = R^{2}\left(d ...
http://physics.stackexchange.com/questions/4199/does-bunching-reduce-synchrotron-radiation
# Does bunching reduce synchrotron radiation?
A continuous charge distribution flowing as a constant current in a closed loop doesn't radiate. Is it therefore true that as you increase the number of proton bunches in the LHC, while keeping the total charge constant, the synchrotron radiation decreases?
-
Please somebody correct the question. The LHC has circulating protons. Either change "the LHC" to "an accelerator" or the "electron" to "proton". – anna v Jan 30 '11 at 15:41
By the way, a remark to the title: it is actually debunching not bunching that reduces the SR. – Igor Ivanov Jan 30 '11 at 18:47
@Anna, I've made the requested edit. – John McVirgo Jan 30 '11 at 18:50
@Igor i'm interested in whether it's possible to collide charged particles together, while reducing synchrotron radiation. You would still need bunched charged particles to do this, correct? – John McVirgo Jan 30 '11 at 19:03
@John — just to make sure: the term "bunching" means "the act of grouping particles in bunches", while "debunching" means "spreading out initially bunched particles into a more homogeneous distribution". You seem to be using "bunching" as an equivalent of "the number of bunches", which is not the correct usage. – Igor Ivanov Jan 30 '11 at 22:32
## 3 Answers
Synchrotron radiation can be coherent and incoherent. Coherent SR arises when electrons are grouped into short bunches so that the entire bunch emits SR as a whole. Quantum mechanically, in coherent SR the photon emission from different electrons in a bunch sum up at the amplitudes level and constructively interfere. In the incoherent SR they sum up at the level of intensity, and there is no interference.
Incoherent SR does not care how electrons are distributed along the ring, while the coherent SR is obviously boosted up in the presence of strong bunching. So, the more homogeneously you distribute the electrons, the less the effect of coherent SR will be and the less overall SR you'll have.
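A toy numerical illustration of this amplitude-level versus intensity-level summation (the emitter count and the random phases below are arbitrary choices): adding $N$ identical phasors in phase gives an intensity proportional to $N^2$, while adding them with random phases gives roughly $N$ on average.

```
import cmath, math, random

N = 1000                                       # arbitrary number of emitters
coherent = abs(sum(cmath.exp(1j * 0.0) for _ in range(N))) ** 2
incoherent = abs(sum(cmath.exp(1j * random.uniform(0, 2 * math.pi))
                     for _ in range(N))) ** 2
print(coherent)     # exactly N**2: amplitudes add, then are squared
print(incoherent)   # fluctuates around N: intensities effectively add
```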
Now let's look at the incoherent SR. Theoretically, you are right: if we managed to create an absolutely homogeneous charge distribution along the ring, we would (classically) have no SR at all, because the charge distribution would not change in time. The point is that this is not feasible experimentally, at least for the accelerators and the beams we have. That would require putting electrons in a well-defined quantum state of the radial motion and a well-defined angular quantum number m for the azimuthal dependence, and the accelerator technology is very far from that.
However, there is another thing which mimics that situation closely. People have managed recently to put freely propagating electrons in states with well-defined orbital angular momentum (m as high as 75), see this paper in Science for details, and they really see the annular distribution for the electron density. For such a state there exists a reference frame where the electron does not move along the z axis but just rotated as a whole in the transverse plane (with some radial distribution) around the symmetry axis. This rotation is not driven by any force, it's just the peculiar superposition of plane waves that creates this steady pattern. So in this case you can say that the electron indeed circulates but does not emit any SR.
-
Ahh, what a relief...:=) Otherwise I wouldn'have slept coming nights. – Georg Jan 30 '11 at 16:05
I'll look further into the points you've made, thanks. – John McVirgo Jan 30 '11 at 20:41
What a wonderful answer! – Carl Brannen Jan 31 '11 at 0:44
Dear John, a good question. You may want to read a relevant paper about the closely related question for the late SSC collider:
http://mafurman.lbl.gov/SSC-N-143.pdf
Bunch-Length Dependence of Power Loss for the SSC
The beam has $M$ bunches in the orbit. Each of them carries electric charge $Ne$. All of the particles orbit with frequency $f_0$ (revolutions per second, in Hertz). We define the product, the bunch current, to be $I_b=N e f_0$.
In equation 12, you will see the result: $$Power = 1.101 Z_0 M I_b^2 \sigma_{\phi}^{-4/3}$$ Here, $Z_0$ is just the impedance of the vacuum, $4\pi/c = 377 \Omega$; they use some Gaussian units.
More importantly, $\sigma_\phi$ is just $\sigma_z/\rho$, the angular root mean square size of the bunch. You see that it's the only quantity whose increase makes the power decrease. If you spread the bunches around the ring, you're getting closer to your "closed loop current" that doesn't radiate, indeed. In practice, you don't want to spread the bunches completely because you wouldn't know the timing of the collisions. In real applications, $\sigma_\phi$ is much smaller than one, giving you a significant increase to the synchrotron radiation.
The formula is simply proportional to the number of bunches. If they're separated, each of them loses the same energy per revolution. Without a loss of generality, you may imagine that we only consider one bunch, $M=1$.
In that case, for a fixed $f_0$ - which is given by the size of the tunnel and the speed of light, assuming that the particles are near the speed of light - the power radiated by the bunch is actually proportional to $N^2$. If you double the number of charged particles in the bunch, the synchrotron radiation quadruples!
That's because the energy density (and flux) is proportional to the squared electric (and magnetic) fields, and those - derived from the Liénard-Wiechert potentials - are linear in the charges (and currents) that produce the electromagnetic fields.
So once again, the power that is radiated is not proportional to the "density" of protons in the bunch but to its square! In this sense, bunching makes the synchrotron radiation worse, not better.
However, you shouldn't think that it is a catastrophe. In the designed conditions for the LHC, one proton only loses something like 6.7 keV of energy per revolution which is a billionth of those 7 TeV they ultimately want to get (in 2011, they decided to continue at 3.5 TeV). Why is it so small for the hadron colliders?
Well, for the lepton colliders, you lose a lot because the synchrotron radiation is proportional to $\gamma^6$ and the Lorentz factor $\gamma$ has to be 2,000 times higher for electrons than for protons to achieve the same energy; see the derivation. Take the sixth power of that to see the impact of the light particles.
For the hadron colliders, the main limitation is of course the magnetic field you need to keep the protons on their circular orbit. That's why you need to have all the superconducting magnets. For colliders with light particles that need a huge $\gamma$, the synchrotron radiation is very important. That's also why linear accelerators are often preferred for the leptons. Well, you won't get rid of the full synchrotron radiation because you still need to accelerate the leptons to have some fun - so there will still be a component of the acceleration in the direction of the velocity even though the straight tunnel may liberate you from the "centripetal" acceleration transverse to the velocity.
To return to the closed loop, yes, I do think that you would turn the synchrotron radiation from the circular motion off completely if you distributed the bunches uniformly - even for leptons. It would be just like a wire with a current. However, there would still be a synchrotron radiation from the acceleration in the forward direction that you need to accelerate the particles to high speeds in the first place.
-
""yes, I do think that you would turn the synchrotron radiation from the circular motion off completely if you distributed the bunches uniformly "" Hello Lubos, wasn't one of the reasons against Sommerfeld/Bohrs atom models that the electron would radiate its energy and "fall" into the core? – Georg Jan 30 '11 at 12:25
Mhhmm, that multitude of electrons circulating in a synchrotron is something different. It seems that they cancel each others radiatiion. I am irritated :=( – Georg Jan 30 '11 at 13:53
Dear Georg, the whole point of the Bohr-Sommerfeld atom was that it required the "number of de Broglie waves" around the orbit to be integer, so the -13.6 eV state was the lowest possible orbit. The model was never consistent with other properties of the electron, of course, but if one assumed that the electrons can only orbit along closed path with the quantized number of waves, then the electron couldn't fall to the nucleus. It was the whole point of the atom that they wanted to fix this "collapse" problem of the classical atom - one that had no quantization. – Luboš Motl Jan 30 '11 at 18:05
Lubos, I thought of the times when a "planetary-like" atom was proposed, but Broglies "waves" were not known yet. – Georg Jan 30 '11 at 20:28
Great informative answer as usual, Lubos. I was asking for the case of keeping the total charge in the ring constant while increasing the bunching. Therefore, from equation (12), N and therefore Ib is inversely propotional to M giving the power loss inversely proportional to the square of M. So yes, for a constant total charge in the ring, the powerloss does decrease dramatically with increased bunching, at least in theory. – John McVirgo Jan 30 '11 at 20:32
I am afraid that the radiation cannot cancel everywhere, so it is better to say that the radiation does not occur in the case of a constant current. This is so because, according to Maxwell's equations, it is not the acceleration of a single charge that creates the radiation but the time dependence of the current at a given point. In other words, different sources radiate differently, and the result is not reduced to the sum of radiations. The total field is determined differently: a superposition of fields is not a sum of radiations! The same is valid in the opposite case of short bunches, where the radiative losses are proportional to the charge squared ( = a source-dependent phenomenon).
-
Dear downvoters, leave short explanation or a disagreement statement, please! – Vladimir Kalitvianski Jan 30 '11 at 18:13
– John McVirgo Jan 30 '11 at 20:48
I did not get your suggestion. Do you mean that radiation of multiple charges can cancel everywhere? Do you mean that in the whole space one can get a purely destructive interference? – Vladimir Kalitvianski Jan 30 '11 at 21:12
Yes, there are some some accelerating charge distributions where the total radiation at all points sums to zero. This goes back to 1910, if you look at the Wikipedia link: "In 1910 Paul Ehrenfest published a short paper on "Irregular electrical movements without magnetic and radiation fields" demonstrating that Maxwell’s equations allow for the existence of accelerating charge distributions which emit no radiation." – John McVirgo Jan 30 '11 at 22:39
I will look through it if it is available but why then it is not present on the textbooks? – Vladimir Kalitvianski Jan 30 '11 at 23:13
http://math.stackexchange.com/questions/188207/convergence-of-a-function-in-a-metric-space-to-its-metric?answertab=active
# Convergence of a function in a metric space to its metric
Given a metric space $(\mathbb{A},d)$ with $d$ being the Euclidean metric, suppose $\lim_{t \rightarrow \infty}||A_{t+1}-A_t|| = 0$, where $A$ is a matrix whose rows are points in the metric space, and suppose there exists a function $f(.)$ acting on the rows of $A$ that converges to the Euclidean metric over the sequence, in the sense that $\lim_{t \rightarrow \infty}||f_{i,j}(A_{t+1})-f_{i,j}(A_t)|| = d(A_{i.},A_{j.})$, where $i,j$ denote the rows of $A$.
I am looking for some reference to study/characterize/understand this phenomenon of convergence of functions to a metric over a convergent sequence in a metric space.
-
As it stands the question is incomprehensible. Maybe you should consult someone who can help you in formulating your question properly. – Christian Blatter Aug 29 '12 at 15:24
http://runescape.wikia.com/wiki/Drop_rate
# Drop rate
Drop rate is the probability that a monster will yield a certain item when killed once by a player. To estimate a drop rate, divide the number of times you have received the item by the total number of that NPC you have killed. For example:
• Bones have a 100% drop rate from Chickens
• Feathers have a 75% drop rate from Chickens
A common misconception is that you are guaranteed the item if you kill the NPC $x$ times, where $\frac{1}{x}$ is the drop rate. You are never guaranteed anything, no matter how many times you kill that monster. The drop rate is simply the probability of getting a certain drop in one kill. The probability that a monster will drop the item at least once in $x$ kills is 1 minus the probability that it will not drop that item in any of those $x$ kills, or $1 - \left(1 - \frac{1}{y}\right)^x$, where $x$ is the number of kills and $\frac{1}{y}$ is the drop rate.
For example, if dust devils are expected to drop a Dragon chainbody once out of 15000 kills, then the probability that a player will get at least one Dragon chainbody after 15000 kills is
$1-\left(\frac{14999}{15000}\right)^{15000}$
Which is approximately 63.21%. Similarly, we can solve for the number of Dust Devils you need to kill to have a 90% probability of getting one when you kill them:
$1-\left(\frac{14999}{15000}\right)^{x} > 0.9$
$\left(\frac{14999}{15000}\right)^{x} < 0.1$
Which yields the answer $x > 34537.6$, so at least 34538 kills are needed. There is also a formula for the probability of obtaining exactly $r$ copies of a particular drop after $n$ kills, where $p$ is the drop rate and $q = 1 - p$:
$P(r,n)={}^{n}\textrm{C}_{r} p^{r}q^{n-r}$
Summing this formula from $r=1$ to $r=n$ gives the probability of at least one drop of a particular item after $n$ kills:
$\sum_{r=1}^{n}{}^{n}\textrm{C}_{r} p^{r}q^{n-r}=1-q^{n}=1-\left (1-p \right )^{n}$
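A minimal Python sketch of these formulas: the first part redoes the dust devil example above, and the second checks the "at least one drop" identity for a hypothetical drop rate of 1/128 over 500 kills (both of those latter numbers are arbitrary illustration values).

```
from math import comb

# Dust devil example: chance of at least one drop, and the 90% threshold.
p = 1 / 15000
print(1 - (1 - p) ** 15000)          # ~0.6321
kills = 1
while 1 - (1 - p) ** kills <= 0.9:
    kills += 1
print(kills)                         # 34538: fewest kills giving > 90% chance

# Probability of exactly r drops in n kills, and the "at least one" identity.
def p_exactly(r, n, p):
    return comb(n, r) * p**r * (1 - p) ** (n - r)

p, n = 1 / 128, 500
print(sum(p_exactly(r, n, p) for r in range(1, n + 1)), 1 - (1 - p) ** n)
```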
## Estimation
Drop rates are often quite difficult to obtain, as an accurate estimate requires thousands of kills. Because of this, some players who wish to calculate drop rates keep a list of the items a monster drops after each kill, sometimes called a "drop log", and then divide the number of desired drops by the total number of kills. All monsters found on this Wiki have a list of the items they drop; after each item you will often find, in brackets, an indication of its drop rate. Drop rates are divided into the five groups displayed below.
| Rarity | Drop rate$^{-1}$ | Example* |
|---|---|---|
| Always | 1 | Bones |
| Common | 2-50 | Coins |
| Uncommon | 51-100 | Rune armour |
| Rare | 101-512 | Abyssal whip |
| Very rare | 513+ | Draconic visage |
* Examples are only given as indication because they depend on the monster that drops it. An item dropped by a boss monster could be a common item while it would be very rare for normal monsters.
### Confidence Intervals
This section should only be considered by people who understand algebraic manipulation and have a basic understanding of a statistical model.
It is given to us that the confidence interval for the success probability of a model $X\sim B(n,p)$ may be expressed as the formula[1]:
$C=p\pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{p(1-p)}{n}}$
Where:
• $p$ - the assumed probability of success given by the ratio of successes to sample size. To clarify: if one were to gain 2 Divine Sigils after 2000 Corp kills, the assumed probability of success would be $\frac{2}{2000}=\frac{1}{1000}=0.001$
• $z_{1-\frac{\alpha}{2}}$ - this is the critical standard score such that $P(Z\leq z_{1-\frac{\alpha}{2}})\approx1-\frac{\alpha}{2}$ for $Z\sim N(0,1)$. This z-value may be found by checking with this table. Information on how to read this table may be found here.
• $\alpha$ - the confidence error you wish your interval to represent. An example value may be 0.05 (this represents 95% confidence).
• $n$ - the amount of trials you've conducted. In the example used in the definition of 'p', this value would be 2000.
To save the reader time, a list of possible z-values is supplied:
| $\alpha$ | Confidence level | $z_{1-\frac{\alpha}{2}}$ |
|---|---|---|
| 0.2 | 80% | 1.28 |
| 0.1 | 90% | 1.64 |
| 0.05 | 95% | 1.96 |
| 0.01 | 99% | 2.57 |
Example of usage
Consider the following case: we have killed a combined total of 500 Black Dragons and have gained 10 Draconic Visages between us. This suggests that we take $p=\frac{10}{500}=0.02$ and $n=500$. Now let us say that we wish to create a 95% confidence interval for our p-value (this is to say that $\alpha=0.05$ and $z_{1-\frac{\alpha}{2}}=1.96$). Our confidence interval is constructed as follows:
$C_{lowerbound}=p-z_{1-\frac{\alpha}{2}}\sqrt{\frac{p(1-p)}{n}}=0.02-1.96\sqrt{\frac{0.02(1-0.02)}{500}}=0.00772846\approx\frac{1}{129}$
And...
$C_{upperbound}=p+z_{1-\frac{\alpha}{2}}\sqrt{\frac{p(1-p)}{n}}=0.02+1.96\sqrt{\frac{0.02(1-0.02)}{500}}=0.0322715\approx\frac{1}{31}$
What this means is that we can be about 95% sure that the drop rate of Draconic Visages (from Black Dragons) is somewhere between 1 in 31 and 1 in 129.
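A minimal Python sketch of this interval calculation (the numbers are the ones used in the example; the helper name is my own):

```python
from math import sqrt

def drop_rate_ci(successes, kills, z=1.96):
    """Normal-approximation confidence interval for a drop rate (z = 1.96 ~ 95%)."""
    p = successes / kills
    half_width = z * sqrt(p * (1 - p) / kills)
    return p - half_width, p + half_width

lo, hi = drop_rate_ci(10, 500)
print(lo, hi)          # ~0.00773 and ~0.03227
print(1 / hi, 1 / lo)  # roughly 1 in 31 to 1 in 129
```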
Notes on usage
• This method of calculating confidence intervals relies on being able to approximate our binomial model as a normal distribution -- as such, most statisticians will not use this method unless $np>5$ and $n(1-p)>5$.[2]
## Trivia
If we let x be an arbitrary number and $1/x$ be the drop rate for a particular drop, the larger x gets (in other words, the rarer the drop is), the closer the probability of obtaining that item in x kills approaches $1 - \frac{1}{e}$, or approximately $0.63212$, where e is the exponential constant $\approx{2.718281828459045}$. We can express this limit as follows:
$\lim_{x \to \infty} 1 - \left(1 - \frac 1x\right)^x = 1 - \frac 1e$
This follows from the definition of $e$:
$e = \lim_{n \to \infty} \left(1 + \frac 1n\right)^n = \sum_{i=0}^{\infty} \frac{1}{i!}$
This leads to the conclusion that, given a drop rate of $\frac{1}{r}$, the approximate chance of not receiving a drop after $n$ kills is $\left(\frac{1}{e}\right)^\frac{n}{r}$. Note that this is only accurate for large values of $r$.
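A few lines of Python make the limit concrete (the loop values are arbitrary):

```python
import math

# As x grows, the chance of at least one drop in x kills at rate 1/x
# approaches 1 - 1/e ~ 0.63212.
for x in (10, 100, 10_000, 1_000_000):
    print(x, 1 - (1 - 1 / x) ** x)
print(1 - 1 / math.e)
```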
http://mathoverflow.net/revisions/118148/list
This would contradict the Harris-Mumford(-Eisenbud) theorem that $M_g$ is non-uniruled for $g$ at least $23$. Let $C$ be a general curve of genus $g$. If $C$ is in "Zieve form", then it is the normalization of the (almost certainly) singular curve in $\mathbb{CP}^1 \times \mathbb{CP}^1$, $$D = \{ ([x_0,x_1],[y_0,y_1]) \in \mathbb{CP}^1\times \mathbb{CP}^1 \ \vert\ y_0^e f(x_0,x_1) - x_0^d g(y_0,y_1) = 0 \}, $$ where $f(x_0,x_1)$, respectively $g(y_0,y_1)$, is a homogeneous polynomial of degree $d$, resp. $e$, such that $f(0,1)$ and $g(0,1)$ are nonzero (or else the defining polynomial factors to a simpler form). By direct computation, the singular points occur where $[x_0,x_1]$ is a multiple root of $f(x_0,x_1)$ and $[y_0,y_1]$ is a multiple root of $g(y_0,y_1)$, or the point is $([0,1],[0,1])$. Moreover, at each point, the local analytic type of the singularity is the same as the plane curve with equation $y^n-x^m$, where $m$, resp. $n$, is the vanishing order of $f(x_0,x_1)$, resp. $g(y_0,y_1)$ at that point. In particular, the "delta invariant" depends only on $(m,n)$. Thus, if you "deform" $f(x_0,x_1)$ and $g(y_0,y_1)$ so that the number and type of multiple roots remains constant, then the normalizations of the corresponding curves in $\mathbb{CP}^1\times \mathbb{CP}^1$ remain of genus $g$. However, the family of such deformations of $(f,g)$ is a rational variety. Precisely, if you write $$ f(x_0,x_1) = (x_1-a_1x_0)^{m_1}(x_1-a_2x_0)^{m_2}\cdots (x_1-a_rx_0)^{m_r}, $$ with $(a_1,\dots,a_r)$ pairwise distinct, then the deformation space for $f$ is just a Zariski open subset of the affine space with coordinates $(a_1,\dots,a_r)$, and similarly for $g(y_0,y_1)$. Since $M_g$ is non-uniruled, this is a contradiction: there is only the constant morphism from a rational variety to $M_g$ whose image contains the general point parameterizing $C$.
Edit. Mike also asks whether this could be true if $f$ and $g$ are rational functions rather than polynomial functions. This is equivalent to replacing the defining equation above in $\mathbb{CP}^1 \times \mathbb{CP}^1$ by the more general equation $$g_0(y_0,y_1)f_1(x_0,x_1) - f_0(x_0,x_1)g_1(y_0,y_1) = 0,$$ where $f_0$, $f_1$ are homogeneous of degree $d$ with no common factor, and where $g_0$, $g_1$ are homogeneous of degree $e$ with no common factor. The same observations apply: the number and types of singularities depend only on the number and multiplicities of the roots of $f_0$, $f_1$, $g_0$ and $g_1$. By varying those (distinct, likely repeated) roots as in the previous paragraph, one gets a morphism from a rational, quasi-projective variety to $M_g$. By Harris-Mumford(-Eisenbud), the only such morphism is constant if the image contains a general point of $M_g$.
http://mathoverflow.net/questions/68132/fundamental-groups-of-surfaces/68133
## fundamental groups of surfaces
What are the properties that hold for the fundamental group of a surface but do not necessarily hold for the fundamental groups of manifolds of higher dimensions?
-
9
"fundamental groups of manifolds of higer dimensions": every finitely presentable group is the fundamental group of some manifold of dim $\ge 4$. – André Henriques Jun 18 2011 at 10:39
1
In some sense, André's comment is the answer (see also Jim's answer below): closed 4-dimensional manifolds are "as bad" as finitely presented groups: so their fundamental group can be non-linear, non-residually finite, non-hopfian, etc.; it can have Kazhdan's property (T), it can have all $L^2$-Betti numbers vanishing, etc. I share Henri's opinion that the OP is vague. – Alain Valette Jun 19 2011 at 5:17
## 6 Answers
This question is very vague, but here are some thoughts to add to Mark's answer.
First, note that any finitely presented group arises as the fundamental group of a closed manifold of dimension 4 (see this MO question), which is a huge contrast to the very special case of dimension 2.
The properties of the fundamental groups of 3-manifolds are a subject of very active research, much aided by Perelman's solution to the Geometrisation Conjecture. Like the 2-dimensional case, 3-manifold groups are residually finite (a theorem of Hempel). The fact that there is no closed 3-manifold with every infinite-index subgroup free is only very recently known, as a result of work of Kahn and Markovic.
I don't think any closed 3-manifold group has cohomological dimension 2, so that property actually does it on its own.
-
It's a conjecture that surface groups are characterized by being the only 1-relator groups such that every finite-index subgroup is also 1-relator and every infinite index subgroup is free.
-
2
Neat. Whose conjecture is that? – Ryan Budney Jun 19 2011 at 2:40
oops, I forgot a condition. Ben Fine: sci.ccny.cuny.edu/~shpil/gworld/problems/… – Agol Jun 19 2011 at 4:33
1
FYI, I recently proved this for 'cyclically pinched' one-relator groups: arxiv.org/abs/1102.2866 . – HW Jun 19 2011 at 6:33
Hey Henry, I haven't thought about this much - why is the condition on infinite index subgroups being free necessary? Is there a (residually finite) 1-relator group with finite-index subgroups 1-relator, but non-free infinite-index subgroups? – Agol Jun 19 2011 at 23:34
2
Ian - The Baumslag--Solitar group BS(1,m) has this property for any m. In fact, this question goes back to Melnikov in the Kourovka notebook (though BS(1,m) is a counterexample to Melnikov's original question). – HW Jun 20 2011 at 6:01
Every subgroup of infinite index is free, the group is residually finite, and the cohomological dimension is 2.
-
how do you prove that every subgroup of infinite index is free? – unkown Jun 18 2011 at 9:57
1
It is well known: A. Hoare, A. Karrass and D. Solitar, Subgroups of infinite index in Fuchsian groups, Math. Z., 125, 1972, 59–69 – Mark Sapir Jun 18 2011 at 10:11
1
unknown - consider the corresponding covering space, and convince yourself that it deformation retracts onto a graph. – HW Jun 18 2011 at 10:38
2
@HW: your statement is actually a nontrivial theorem of Whitehead, so you might be overestimating the OP's mathematical prowess. – Igor Rivin Jun 19 2011 at 0:39
Igor - I suppose I really had the finitely generated case in mind. – HW Jun 19 2011 at 6:35
The word problem for the fundamental group of a closed surface is solvable, using Dehn's algorithm. Since any finitely presented group appears as the fundamental group of some closed $4$-manifold, and there are such groups for which the word problem is unsolvable, this is indeed a special property for two dimensions.
-
2
Not really special to two dimensions, since also true in three dimensions. – Igor Rivin Jun 19 2011 at 2:18
@Igor: I didn't interpret the question that way. I interpreted it to mean, what's true in two dimensions that's not true in general. – Jim Conant Jun 19 2011 at 12:22
2
@Jim: I personally think the question is rather poor, so any answer makes more sense than the question itself. To increase the silliness slightly: "what is true of free groups which is not true of general groups"? – Igor Rivin Jun 20 2011 at 0:49
A surface group is either virtually abelian, or word hyperbolic (or both, when it is finite).
In some sense, this reflects the fact that every surface admits a Riemannian metric of constant curvature, and that the sign of the curvature is detected by the fundamental group.
In dimension 3, Perelman's uniformization implies that compact manifolds can be decomposed into "geometric pieces" (that are again detected in a suitable sense by their fundamental groups), while in higher dimension there is no hope for a simple result of this type.
-
-
http://math.stackexchange.com/questions/288739/uniform-integrability-for-a-single-random-variable
# Uniform integrability for a single random variable
Let $X$ be a random variable. Are the following three equivalent?
• $X \in L^1$, i.e. $E |X| < \infty$.
• $X$ is uniformly integrable. That is, for every $\epsilon>0$ there exists $K\in[0,\infty)$ such that $E(|X|I_{|X|\geq K})\le\epsilon$, where $I_{|X|\geq K}$ is the indicator function $I_{|X|\geq K} = \begin{cases} 1 &\text{if } |X|\geq K, \\ 0 &\text{if } |X| < K. \end{cases}$
• For every $\epsilon > 0$ there exists $\delta > 0$ such that, for every measurable $A$ such that $\mathrm P(A)\leqslant \delta$, $\mathrm E(|X|:A)\leqslant\epsilon$.
-
What are your thought at least about the first two? – Ilya Jan 28 at 7:33
@Ilya: the first implies the second, proven by contradiction? – Ethan Jan 28 at 7:38
You can prove it directly by using the Dominated Convergence theorem. – Stefan Hansen Jan 28 at 7:42
@Ilya: I was wrong in my first comment. I forgot that the measure is a probability measure here. Is uniform integrability only defined for probability spaces? – Ethan Jan 28 at 7:42
@StefanHansen: Can you elaborate? Thanks! – Ethan Jan 28 at 7:43
show 2 more comments
## 1 Answer
Here's a sketch showing the equivalence of the last two statements: Let $(\Omega,\mathcal{F},P)$ denote the probability space we are working on.
$2)\Rightarrow 3)$: For any $A\in\mathcal{F}$ and $K>0$ we have
$$E[|X|:A]\leq E[|X|: |X|\geq K]+KP(A).$$ Let $\varepsilon>0$ be given, and pick $K>0$ (given by the assumption) such that $$E[|X|:|X|\geq K]\leq \frac{\varepsilon}{2}$$ and pick $\delta=\frac{\varepsilon}{2K}$. Conclude.
$3)\Rightarrow 2)$: Use Markov's inequality and the assumption to conclude that $$P(|X|\geq K)\to 0\quad \text{for }K\to\infty.$$ Let $\varepsilon >0$ be given. Pick a $K>0$ such that $P(|X|\geq K)\leq \delta$. Conclude.
-
Is the first one equivalent to the second or third? – Ethan Jan 28 at 8:05
It's equivalent to both (this answer shows that the second and third are equivalent). But it is easiest to show that it's equivalent to the second, which you started doing in the comments. – Stefan Hansen Jan 28 at 8:07
http://mathoverflow.net/questions/45567/conformal-mapping-of-c-d-onto-c-1-1
Conformal mapping of C \ D* onto C \ (-1, 1) [closed]
What is the concrete formula for the conformal mapping (normalized at infinity), acting from $\mathbb C \backslash D^*$ onto $\mathbb C\backslash[-1, 1]$?
Here $\mathbb C$ denotes the set of all complex numbers and $D^*$ denotes the closed unit disk of the complex plane.
Also, I would be interested in references containing many examples of such conformal mappings, obtained by replacing the interval $[-1, 1]$ with various other subsets of the complex plane.
Thanks a lot.
-
5
You can find this information in almost any complex analysis book. – S. Carnahan♦ Nov 10 2010 at 16:54
In more detail: compose the Koebe function planetmath.org/encyclopedia/KoebeFunction.html with an appropriate Moebius transformation – Yemon Choi Nov 10 2010 at 18:15
See also en.wikipedia.org/wiki/Joukowsky_transform – S. Carnahan♦ Nov 13 2010 at 12:36
1 Answer
Finally I found that the conformal mapping is given by the formula $f(z)=\tfrac{1}{2}(z + 1/z)$. Thanks anyway.
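As a quick numerical sanity check (my own sketch, not part of the answer), one can verify that this map, the Joukowsky transform mentioned in the comments, sends the unit circle onto the segment $[-1,1]$ and circles outside the disk onto ellipses around it:

```python
import numpy as np

f = lambda z: 0.5 * (z + 1 / z)

theta = np.linspace(0, 2 * np.pi, 1000)
boundary = f(np.exp(1j * theta))                 # image of the unit circle |z| = 1
print(np.max(np.abs(boundary.imag)))             # ~0: the image lies on the real axis
print(boundary.real.min(), boundary.real.max())  # ~-1 and 1: the segment [-1, 1]

outside = f(2.0 * np.exp(1j * theta))            # image of the circle |z| = 2
print(outside.real.max(), outside.imag.max())    # 1.25 and 0.75: an ellipse around [-1, 1]
```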
-
http://quant.stackexchange.com/questions/3114/risk-parity-portfolio-construction?answertab=active
Risk Parity portfolio construction
If I would like to construct a fully invested long only portfolio with two asset classes (Bonds $B$ and Stocks $S$) based on the concept of 'risk parity' the weights $W$ of my portfolio would be the following:
The weight of the bonds would be $W_B = \textrm{Vol}(S)/[\textrm{Vol}(S)+\textrm{Vol}(B)]$ and the weight of the stocks $W_S = 1 - W_B$.
Based on this I am going to overweight the low-volatility asset and underweight the high-volatility asset. My question is: how do I calculate the weights for a portfolio with multiple asset classes, 5 for example, so that each asset class will have the same volatility and contribute the same amount of risk to my portfolio? From historical data I can extract the volatility of each asset class and the correlation between them.
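For concreteness, here is a tiny sketch of the two-asset rule with made-up volatility numbers (the figures are illustrative only):

```python
vol_stocks, vol_bonds = 0.15, 0.05   # hypothetical annualised volatilities

w_bonds = vol_stocks / (vol_stocks + vol_bonds)   # 0.75: overweight the low-vol asset
w_stocks = 1.0 - w_bonds                          # 0.25
print(w_bonds, w_stocks)

# Naive check (ignoring correlation): each leg has the same stand-alone
# volatility contribution, 0.75 * 0.05 == 0.25 * 0.15 == 0.0375.
print(w_bonds * vol_bonds, w_stocks * vol_stocks)
```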
-
– Bob Jansen Mar 22 '12 at 21:35
You can offset some of the "diversification" (it's diversification only if the numbers hold during high stress periods) by raising the leverage on the low volatility assets. – bill_080 Mar 22 '12 at 22:24
@Bootvis: I don't think it's OT. But the formatting could certainly be improved. But the subject is non-trivial. – SRKX♦ Mar 23 '12 at 7:10
It certainly is an interesting topic but the question, as it is now, does not seem to be written by a professional quant. I would edit the question if I had the time. – Bob Jansen Mar 23 '12 at 8:05
1 Answer
Risk parity is not about "having the same volatility"; it is about having each asset contribute in the same way to the portfolio's overall volatility.
The volatility of the portfolio is defined as:
$$\sigma(w)=\sqrt{w' \Sigma w}$$
The risk contribution of asset $i$ is computed as follows:
$$\sigma_i(w)= w_i \times \partial_{w_i} \sigma(w)$$
You can then show that:
$$\sigma(w)=\sum_{i=1}^n \sigma_i(w)$$
The vector of the marginal contributions ($\partial_{w_i} \sigma(w)$) is computed as follows:
$$c(w)= \frac{\Sigma w}{\sqrt{w' \Sigma w}}$$
You can then find the solution by running the following optimization:
$$\underset{w}{\arg \min} \sum_{i=1}^N \left[\frac{\sqrt{w^T \Sigma w}}{N} - w_i \, c(w)_i\right]^2$$ This article contains all the developments you require to understand how the formulas above are derived.
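Here is a minimal numerical sketch of that optimisation in Python with `numpy`/`scipy` (the covariance matrix is made up, and the long-only, fully-invested constraints from the question are added as assumptions of this sketch):

```python
import numpy as np
from scipy.optimize import minimize

def risk_contributions(w, cov):
    """Total risk contribution of each asset: sigma_i(w) = w_i * (Sigma w)_i / sigma(w)."""
    sigma = np.sqrt(w @ cov @ w)
    return w * (cov @ w) / sigma

def risk_parity_weights(cov):
    """Make each asset's risk contribution equal to sigma(w)/N, long-only, fully invested."""
    n = cov.shape[0]

    def objective(w):
        sigma = np.sqrt(w @ cov @ w)
        return np.sum((risk_contributions(w, cov) - sigma / n) ** 2)

    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * n
    w0 = np.full(n, 1.0 / n)
    res = minimize(objective, w0, method='SLSQP', bounds=bounds, constraints=cons)
    return res.x

# Hypothetical covariance matrix for 3 asset classes (numbers made up for illustration).
cov = np.array([[0.04,  0.006, 0.002 ],
                [0.006, 0.01,  0.001 ],
                [0.002, 0.001, 0.0025]])
w = risk_parity_weights(cov)
print(w, risk_contributions(w, cov))   # the contributions should come out (roughly) equal
```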
-
Can you explain what techniques are needed to run that optimization? – nxstock-trader Mar 28 '12 at 22:09
1
– SRKX♦ Mar 28 '12 at 22:13
I meant what packages/routines to use if I were doing this in R? – nxstock-trader Mar 29 '12 at 1:01
@nxstock-trader: you should be able to find something on this page. I haven't used R for optimization for a long time. You can ask on Mathematics or Stack Overflow as well. – SRKX♦ Mar 29 '12 at 6:09
http://mathoverflow.net/questions/14586?sort=votes
## When can a function be recovered from a distribution?
What properties does a distribution (in the generalized function sense) have to have in order to be a function? That is, when is $T(\varphi) = \int f \varphi$ for some $f$?
-
2
en.wikipedia.org/wiki/Radon–Nikodym_theorem – Ryan Budney Feb 8 2010 at 2:35
Ryan: I think a bit more detail might be needed to show how one leverages the RN-theorem for measures to get an analogous result for distributions. (Also, link error.) – Yemon Choi Feb 8 2010 at 2:43
## 3 Answers
First of all, $T$ must have order zero, i.e., $|T(\varphi)|\le C(K)\sup|\varphi|$ for any test function $\varphi$ supported on a compact set $K$. By the Riesz representation theorem, $T$ is a measure. To be a locally integrable function, it must be absolutely continuous with respect to the Lebesgue measure. One way to express this condition: $C(K)\to 0$ as the Lebesgue measure of $K$ tends to zero, with $K$ staying within a fixed compact set.
-
Do you have any references where I could learn the details of this? Thank you very much! – commonname Feb 8 2010 at 2:55
1
Almost any introduction to distribution theory will contain the required ingredients for this argument. Personally, I learned it first from Rudin's functional analysis book. – Harald Hanche-Olsen Feb 8 2010 at 3:14
I haven't thought about this carefully enough, but it seems that there is some ambiguity in your question about what the integral $\int f\varphi$ is supposed to mean. As Ryan and Leonid have said: if you want the representing function $f$ to be locally integrable then the Radon-Nikodym theorem is what you need.
On the other hand, if you allow principal-value integrals (which is probably not what you want, I'm guessing, but I wasn't sure from your question) then I think
$$\varphi \mapsto \int_{\rm p.v.} \frac{\varphi(t)}{t}\ dt$$
would be a tempered distribution that is in some sense `represented by a function', even though the function is not everywhere locally integrable.
-
Assuming that the question is to be understood in the sense of when a distribution is represented by a locally integrable function, here is a characterisation which is perhaps more applicable than the solution already given: for each compact $K$ and each sequence $(\phi_n)$ of test functions with support in $K$ which are uniformly bounded and converge in the $L^1$-norm to zero, $T(\phi_n) \to 0$. This is because there is a nice, complete topology on $L^\infty(K)$ for which the test functions are dense, the dual is $L^1$ and the convergence is as above. There are several explicit descriptions of this topology---as a strict topology, as a mixed topology or as the Mackey topology for the duality $(L^\infty,L^1)$ (see the book "Saks Spaces and Applications to Functional Analysis").
-
http://math.stackexchange.com/questions/tagged/induction+functions
Tagged Questions
4 answers · 287 views
$f: \mathbb{R} \to \mathbb{R}$ satisfies $(x-2)f(x)-(x+1)f(x-1) = 3$. Evaluate $f(2013)$, given that $f(2)=5$
The function $f : \mathbb{R} \to \mathbb{R}$ satisfies $(x-2)f(x)-(x+1)f(x-1) = 3$. Evaluate $f(2013)$, given that $f(2) = 5$.
5 answers · 465 views
IMO 1987 - function
Show that there is no function $f: \mathbb{N} \to \mathbb{N}$ such that $f(f(n))=n+1987, \ \forall n \in \mathbb{N}$.
2 answers · 86 views
Tricky well defined function and induction
Lets define a function $f$ such that $\Bbb N \times\Bbb N \to\Bbb N$. It takes two natural numbers as inputs and also outputs a natural number. Let $f$ have the following properties \$f(a,b) = ...
2 answers · 75 views
how to prove ${{a}_{0}}+{{a}_{1}}x+{{a}_{2}}{{x}^{2}}+\cdots +{{a}_{n}}{{x}^{n}}=0$ has at least one real root in $(0,1)$. [duplicate]
Possible Duplicate: Prove existence of a real root. If $a_0$+$\frac{a_1}{2}$+$\frac{a_2}{3}+\ldots+\frac{a_n}{n+1}=0$, how to prove \${{a}_{0}}+{{a}_{1}}x+{{a}_{2}}{{x}^{2}}+\cdots ...
3 answers · 78 views
Induction on functions
I'm working through some homework on induction, and most problems I can solve fine, but I have problem getting started on induction proofs that ask you to prove function relations. For example, here ...
0 answers · 68 views
Using induction to prove $f(n)=2n-3$ if $n\lt4$, $f(n)=2n-4$ if $n\ge4$
I have a problem, for which the solution(by looking at the pattern) I found is $$f(n)=\begin{cases}2n-3,\text{if }n<4\\2n-4,\text{if }n\ge 4\;.\end{cases}$$ I want to prove it inductively, I'm ...
3 answers · 344 views
Question about a recursively defined function
Problem. Let $(f_n)_{n=1}^\infty$ be a sequence of functions $f_n\colon [-1,\infty)^n\to\mathbb{R}$ that are recursively defined in the following way: $$f_1(x_1)=1+x_1,$$ f_n(x_1,\ldots,x_n) = ...
http://math.stackexchange.com/questions/71071/easiest-shortest-proof-of-the-following-theorem
# Easiest/shortest proof of the following theorem
If $f$ is a rational function defined on the complex plane, then the number of zeros is equal to the number of poles (counting multiplicity and including points at infinity).
I can imagine a proof using the argument principle. Is this the simplest route to the theorem above? Is there a name for the theorem above?
-
3
Isn't this clear just from factoring the numerator and denominator then cancelling any common factors? The number of zeros and poles is then visible (being careful about those at infinity) – mt_ Oct 9 '11 at 11:08
Just so I can clarify what you're saying: $f(z)=\frac{(z-1)}{(z-2)(z-3)}$ has a zero at 1 and poles at 2 and 3, each of multiplicity 1. How can I work out the multiplicity of the zero at infinity without invoking the theorem mentioned above? – Jam Baxter Oct 9 '11 at 11:34
2
A factor $z-\alpha$ has a zero of order $1$ at $\alpha$ and a pole of order $1$ at $\infty$. – Christian Blatter Oct 9 '11 at 11:54
Ok thanks guys. Really helped. – Jam Baxter Oct 9 '11 at 12:29
http://mathhelpforum.com/calculus/208700-conditional-second-order-derivative-degree-homogeneity.html
# Thread:
1. ## Conditional Second-order Derivative and Degree of Homogeneity
I have a function $f(a,b)$ with first-order derivatives $f_a>0, f_b>0$ and second-order derivatives $f_{aa}<0, f_{bb}<0, f_{ab}>0$.
Additionally I know that the degree of homogeneity of the function is larger than $0$ but smaller than $1$.
In an economics paper I found the statement that given these assumptions the derivative of $f_a$ with regard to $a$ is negative if the first-order derivative with regard to $b$ is held constant:
$\frac{\partial f_a}{\partial a}\mid_{f_b=C} <0$
Why is this so?
I understand that I can decompose $\frac{f_S(a,b)}{\partial a}$ into
$f_{aa} + f_{ab} \frac{\partial b}{\partial a}$
Now, I would have to show that $f_{aa} + f_{ab} \frac{\partial b}{\partial a} < 0$, i.e. that $f_{ab} \frac{\partial b}{\partial a} < -f_{aa}$, but I don't know how to do that using $f_b=C$ and the information about the degree of homogeneity.
2. ## Re: Conditional Second-order Derivative and Degree of Homogeneity
Just a small correction. It should read:
I know that I can decompose $\frac{\partial f_a(a,b)}{\partial a}$ into ...
http://mathhelpforum.com/calculus/160431-calculus-3-optimization-without-constraints.html
# Thread:
1. ## Calculus 3 - Optimization without Constraints
The behavior of a function can be complicated near a critical point where D=0. Suppose that f(x,y)= x^3 - 3xy^2.
a.) Show that there is one critical point at (0,0) and that D=0 at that point.
b.) Show that the contour for f(x,y)=0 consists of three lines intersecting at the origin where f alternates from positive to negative. Sketch a contour diagram for f near 0.
I already did part a, but I have no idea how to even start part b.
2. Originally Posted by alyssalynnx38
The behavior of a function can be complicated near a critical point where D=0. Suppose that f(x,y)= x^3 - 3xy^2.
a.) Show that there is one critical point at (0,0) and that D=0 at that point.
firstly solve for $\nabla f = 0$
3. Like I said, I already did part a.
$fx = 3x^2 - 3y^2$
$fy = -6xy$
$fxx = 6x$
$fyy = -6x$
$fxy = fyx= -6y$
If you set fx and fy equal to 0, both x and y are 0, because fy = 0 gives xy = 0 and fx = 0 gives x^2 = y^2, which together force x = y = 0.
$D = fxx*fyy - (fxy)^2$
$D = (6x)*(-6x) - (-6y)^2 = -36x^2 - 36y^2$
$D(0,0) = -36*0^2 - 36*0^2 = 0$
So, there is a critical point at (0,0) and D = 0. And that's as far as I was able to get.
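For part b, note that $f(x,y) = x^3 - 3xy^2 = x(x-\sqrt{3}\,y)(x+\sqrt{3}\,y)$, so the zero contour is three lines through the origin. A quick numerical sketch (my own, not from the original thread) showing how the sign of f alternates around the origin:

```python
import numpy as np

f = lambda x, y: x**3 - 3 * x * y**2

# Walk around a small circle centred at the origin and record the sign of f.
# On the unit circle f(cos t, sin t) = cos(3t), so the sign flips six times,
# once at each crossing of the lines x = 0, x = sqrt(3)*y, x = -sqrt(3)*y.
t = np.linspace(0, 2 * np.pi, 18, endpoint=False)
print(np.sign(f(np.cos(t), np.sin(t))))
```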
http://physics.stackexchange.com/questions/35516/why-is-the-heisenberg-uncertainty-principle-stated-the-way-it-is/35551
# Why is the Heisenberg uncertainty principle stated the way it is?
I spent a long time being confused by the Heisenberg uncertainty principle in my quantum chemistry class.
It is frequently stated that the "position and momentum of a particle cannot be simultaneously known to arbitrary precision" (or the same for any other pair of observables with $[A, B] \neq 0$).
This made no sense to me -- why can't you measure both of these? Is my instrument just going to stop working at a certain length scale? The Internet was of little help; Wikipedia describes it this way as well and gets into philosophical arguments on what "position" and "momentum" mean and whether they really exist (in my opinion, irrelevant nonsense that has no effect on our ability to predict things).
Eventually it was the equation itself that gave me the most insight:
$$\sigma_x \sigma_p \geq \frac{\hbar}{2}$$
Look at that – there's two standard deviations in there! It is impossible by definition to have a standard deviation of one measurement. It requires multiple measurements to have any meaning at all.
After some probing and asking around I figured out what this really means:
Multiple repeated measurements of identically prepared systems don't give identical results. The distribution of these results is limited by that formula.
Wow! So much clearer. Thus $\hat{r}(t)$ and $\hat{p}(t)$ can be known for the same values of $t$ to as much precision as your measuring equipment will allow. But if you repeat the experiment, you won't get identical data.
Why doesn't everyone just state it that way? I feel like that would eliminate many a student's confusion. (Unless, of course, I'm still missing something – feel free to enlighten me should that be the case).
EDIT: This post was at +1. Who downvoted me? I took a while to write out my question clearly and made sure it followed the guidelines on here.
-
The uncertainty principle is often stated in the form you started with because it's the most accurate yet universal way to state it. In particular, your "version of the principle" directly contradicts the uncertainty principle and it is therefore wrong. No, $r(t)$ and $p(t)$ cannot have well-defined values at the same moment, not even in principle. It's because $r$ and $p$ are really identified with operators that satisfy $rp-pr=i\hbar$ and no actual numbers $r,p$ can obey this equation. It follows that $r(t)$ and $p(t)$ can't be equal to two particular numbers. – Luboš Motl Sep 3 '12 at 9:37
It is true that in general, repetitions of the same experiment - with the same initial state - will yield different, partly random outcomes (and quantum mechanics allows us to predict all the relevant probability distributions). But this observation isn't something that allows you to circumvent or deny the Heisenberg uncertainty principle; on the contrary, the randomness and "not strict reproducibility" of the experiments are consequences of the uncertainty principle. – Luboš Motl Sep 3 '12 at 9:40
The claim "one gets non-unique, fluctuating predictions for $x$" is the very same thing as saying that $\Delta x\gt 0$: the probabilistic distribution isn't peaked, it has a width. And the reason why $\Delta x\gt 0$ is that $\Delta p\lt \infty$ as well as $\Delta x\cdot \Delta p \geq \hbar/2$ (the latter is the uncertainty principle). One may also exchange $p$ and $x$ and say the thing in the opposite way. The Heisenberg inequality itself says that $p,x$ can't be both sharply determined at a given moment. – Luboš Motl Sep 3 '12 at 9:42
One more comment: the fact that $\Delta x\gt 0$ which means that there is an uncertainty in $x$ isn't associated with any measuring apparatus. It's the whole point of the uncertainty principle that it is a universal principle in physics that no measuring apparatus can overcome. – Luboš Motl Sep 3 '12 at 10:36
Thanks for the replies. I believe I am still somewhat confused in terms of what this means for experiment. Sorry for the really absurd example here, but: if you have a hypothetical digital "position" probe and a digital "momentum" probe and the position probe displays the number 4.5293029086467... then what will the momentum probe be displaying at the same time? I mean, it can't just stop working once the position probe shows enough digits. It's got to show some value, right? – Nick Sep 3 '12 at 15:49
show 3 more comments
## 2 Answers
Nick, Don't be surprised that this is confusing. There are a lot of concepts intermixed in the discussion of the uncertainty principle that are frequently not clearly understood and are intertwined unintentionally.
Although one often sees that these are stated in statistical terms, the standard deviation does not directly require multiple observations of a sample to understand. Traditional statistics does rely upon repeated sampling in order to develop a standard deviation, however in quantum mechanics the idea is more closely associated with properties associated with the Fourier transform.
To understand the Fourier transform one must first understand what a Fourier series is. The hyperlink will take you to a discussion about the Fourier series as it relates to sound. Starting at about minute two you see a representation of a saw-tooth like wave form. When they show you in the video how the saw-tooth like wave has many components, those components are determined by performing a Fourier transform. In many cases, they transform time series functions into frequency functions (which is directly proportional to energy) but the transform is also applicable to situations where one is transforming position into momentum.
Essentially what happens, is that if one wants to have complete certainty in the value of momentum (or energy), one must look at the entire position (or time) spectrum. In other words, a definite position, when transformed into the momentum domain, requires the entire momentum domain. If one allows a little uncertainty in the position, one does not require the entire momentum domain.
This relationship can be well defined as it relates to Fourier Transforms. This is the real source of the uncertainty principle, and does not require a statistical interpretation to understand.
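A small numerical illustration of this trade-off (my own sketch using numpy, with arbitrary grid parameters): the narrower a Gaussian is in $x$, the wider its Fourier transform is in $k$, and the product of the two spreads stays fixed.

```python
import numpy as np

def widths(sigma_x, n=4096, span=50.0):
    """Spread of a Gaussian and of its Fourier transform (discrete sketch)."""
    x = np.linspace(-span, span, n)
    psi = np.exp(-x**2 / (2 * sigma_x**2))                  # Gaussian of width sigma_x
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=x[1] - x[0]))
    phi = np.fft.fftshift(np.fft.fft(psi))                  # its Fourier transform

    def spread(u, amp):                                     # std dev weighted by |amp|^2
        w = np.abs(amp) ** 2
        w = w / w.sum()
        mean = np.sum(u * w)
        return np.sqrt(np.sum((u - mean) ** 2 * w))

    return spread(x, psi), spread(k, phi)

for s in (0.5, 1.0, 2.0):
    dx, dk = widths(s)
    print(s, dx, dk, dx * dk)   # the product stays near 0.5: narrower in x -> wider in k
```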
-
No objection to most of this, but I would say that the observable consequences of having a particular wavefunction only become clear with the statistical analysis of a large set of measurements. That's where the statistical connection comes in. If you're just doing mathematical manipulations on the wavefunctions, then sure, no statistics need be involved. – David Zaslavsky♦ Sep 3 '12 at 22:20
Thanks David, definitely agree with the point! – Hal Swyers Sep 3 '12 at 23:12
I have given an example down to the bare basics of HUP. One particle, no statistics, in a magnetic field. Its circle can be predicted by construction with accuracy. The HUP says that I cannot see in a detector a more accurate position than it (the HUP) allows; this means that the position I find will deviate from the predicted circle within the HUP bound. I mean that it will be equally probable for the measured point to deviate at the edge of HUP as at the center of the interval. I wonder if anyone has done the experiment. – anna v Sep 4 '12 at 4:39
I will turn my comments into an answer.
Quantum mechanics is the underlying stratum of nature, but it dominates the microcosm, at scales commensurate with Planck's constant, h = 4.135667516(91)×10^−15 eV·s. Very small numbers. The Heisenberg Uncertainty Principle (HUP) does not apply statistically; it applies to individual particles, be they elementary particles, molecules, or phonons etc. in solids, individual "particles".
We are not talking of accumulating statistics and statistical errors. Half of your misunderstanding comes from those sigmas in the formula you found, since the sigma symbol is associated with the standard deviation.
The delta symbol is more appropriate because it has nothing to do with statistics; it is the mathematical range of the value: p +/- Delta(p). If one wants to constrain the momentum within this range, then the measurement of position is constrained by the Heisenberg uncertainty principle. So the better one knows momentum the worse one knows position, and vice versa.
Probes are macroscopic and used statistically. When we talk about quantum mechanical quantities we have to address single individual particles:
Take a single ionized molecule, put it in a magnetic field so that the circle it will make will give you the momentum p with designed accuracy delta(p). The HUP tells you that any position detector you devise for the same particle, for example an emulsion film, will only be able to give you a delta(x) for the same particle within the HUP constraint.
-
In my experience this interpretation of the uncertainty principle (as a constraint instead of a standard deviation) has been responsible for more incorrect conclusions than any other aspect of it. Yes, it works, but only because the wavefunction of a constrained particle has a standard deviation of the same order as the size of the constraint. – David Zaslavsky♦ Sep 3 '12 at 22:23
@DavidZaslavsky My point is that when you call it a standard deviation people start looking at classical statistics, which defeats the point. It is not classical statistics that defines the delta but the commutators. – anna v Sep 4 '12 at 3:00
I'm not sure I see what you mean... the $\sigma$ in the uncertainty principle is nothing more than the standard deviation of the probability distribution on eigenvalues of the relevant operator. That's a (classical, I suppose) statistical quantity. – David Zaslavsky♦ Sep 4 '12 at 3:51
@DavidZaslavsky and do you think it has a gaussian distribution, as it would if it were a real sigma? Or is it an interval of undefined values for that variable? I am afraid that this is how the thinking about underlying classical explanations of QM start. – anna v Sep 4 '12 at 4:01
In fact, my textbook (Quantum Chemistry by McQuarrie) actually used the words "standard deviation" and "normal distribution". Maybe I'm not the only one who is confused. If authoritative sources all say different things, it's no wonder nobody grasps this concept. – Nick Sep 4 '12 at 4:31
show 1 more comment
http://physics.stackexchange.com/questions/44457/what-is-force-how-does-a-constant-force-output-a-nonconstant-power?answertab=votes
What is force? How does a constant force output a nonconstant power?
For a constant force, P=Fv. I understand the mathematical derivation of this, but this seems to me, intuitively, to be nonsense. I feel that my discomfort with this comes from a fundamental misunderstanding of force and Newton's Second Law, so I'm not really looking for any mathematical explanation. So, to begin:
How is it that a constant force does not add energy to a system at a fixed rate? Consider a rocket burning fuel at a constant rate. The chemical potential energy should be converted to kinetic energy at a constant rate, that is, (1/2)mv^2 should increase linearly. The magnitude of the velocity of the rocket would then increase at a less-than-linear rate, implying a nonconstant acceleration and therefore a nonconstant force/thrust (F=ma).
If force is indeed a "push or a pull," shouldn't that constant rate of burning of fuel yield a constant "push or pull" as well? Clearly not, so I would have to think that, somehow, a given force applied to a certain object at rest would in some way be different from a force of the same magnitude being applied to that same object in motion. In this sense, is force merely a mathematical construct? What does it tangibly mean, in physical terms? Would a given force acting upon me "feel" different to me (in terms of tug) as I move at differing velocities?
Force being defined as a "push or pull," which is how it has been taught in my high school class, seems rather "handwavy," and maybe that's the issue. It's been troubling me for a couple of weeks and my teacher hasn't really been able to help, so thanks!
-
5 Answers
There's nothing wrong with any of these other answers, but for another perspective, if you have a constant force acting on an object starting with zero velocity, then it will accelerate with constant acceleration $\frac{F}{m}$, and thus, after $t$ time, will have velocity $v=\frac{F}{m}t$. This means that the kinetic energy that it has acquired will be given by $\frac{1}{2}mv^{2} = \frac{F^{2}t^{2}}{2m}$.
Since the power is the rate of energy consumption, we have:
$$P = {\dot E} = \frac{F^{2}t}{m}$$
so, it should be obvious that the power increases with time. It should also be clear that our expression for $P$ is equal to $Fv$.
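A short numerical sketch of this result (the force and mass values are arbitrary):

```python
import numpy as np

F, m = 10.0, 2.0                 # constant force (N) and mass (kg), made-up values
t = np.linspace(0.0, 5.0, 6)     # seconds

v = (F / m) * t                  # constant acceleration F/m
kinetic = 0.5 * m * v**2         # grows like t^2, not linearly
power = F * v                    # P = F*v = F^2 t / m, grows linearly with time

print(kinetic)   # [  0.  25. 100. 225. 400. 625.]
print(power)     # [  0.  50. 100. 150. 200. 250.]
```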
-
How is it that a constant force does not add energy to a system at a fixed rate?
Because the velocity isn't constant. Think of it this way; the force is constant but the distance through which the force acts, per unit time, and thus the amount of work done by the force, is changing.
For the energy to change at a fixed rate (for the power to be constant), the work done per unit time must be constant; the force would need to decrease in inverse proportion to the speed.
As an aside, in the case of a rocket, you must also consider the energy of the exhaust products, i.e., the PE of the propellants is converted to KE of both the rocket and the expelled combustion products.
Also, since the rocket is expelling mass, the acceleration of the rocket, for a constant thrust, will not be constant
-
I understand the "quasi-mathematical" explanation that since velocity isn't constant, energy is not added at a fixed rate. What is hard for me to swallow is that the instantaneous velocity should have bearing on the amount of work the force/"push or pull" is doing at all. I mathematically get that W=Fd, but it seems to me that a constant force/"push or pull" should still change the energy of a system at a constant rate. Why should a "push or pull" be defined in terms of acceleration and not energy change; once again, what really is a force in physical, not mathematical, terms? – high schooler Nov 18 '12 at 1:49
Consider a linear electric motor pushing a mass with a constant force. In the 1st second, the motor moves through 1 meter. In the 2nd second, the motor moves through 3 meters and in the 3rd second, through 5 meters. Is it not intuitive that the motor does more work in the 2nd second than the 1st? And in the 3rd than the 2nd and so forth? – Alfred Centauri Nov 18 '12 at 2:00
The mathematics of F=ma and W=Fd yield that conclusion pretty intuitively. Physically, it should make more sense that a constant tug (force) should change the energy of a system linearly rather than the velocity. Otherwise, the amount of "tug" I would feel on myself when a constant force(=ma) changes my motion would not be constant. – high schooler Nov 18 '12 at 2:09
I don't think it should make more sense physically that the energy should change linearly; momentum yes, energy no. – Alfred Centauri Nov 18 '12 at 2:13
I'm probably not being very clear. Essentially, that a constant force can output a nonconstant power makes sense when considering them to be mere mathematical constructs. But it makes no sense to me that they should be defined as such! Momentum, to me, seems to be even more of an arbitrary construct (which happens to be usefully mathematically); what is momentum in physical terms? The most natural definition of force, I feel, should be of a constant power output. – high schooler Nov 18 '12 at 2:21
show 5 more comments
I know what you mean. "Force" is quite a strange concept. Some thoughts:
F * x = E = W (if you think in one dimension)
Force applied over a distance gives energy.
If you want something that you can apply over time and get energy, you are looking for power.
I would have to think that, somehow, a given force applied to a certain object at rest would in some way be different than that a force of the same magnitude being applied to that same object in motion.
That is kinda correct. The faster the object is moving, the more power is applied: `F * v = P`
Getting back to your rocket example, the engine burns fuel at a constant rate, so the POWER of the rocket engine is constant: `power * time = energy`. Also, `force * speed = power`. That means that as the speed increases, the force that the engine applies to the rocket decreases.
As you said, the rocket should gain kinetic energy linearly with time: E_kin = enginePower * time. Since E_kin = factor*v^2, we get that v^2 is proportional to the time, which in turn gives v = somefactor * sqrt(time). The speed is proportional to the square root of the time. Since force times speed should be constant, the force is proportional to the inverse of the square root of time.
In other words: accelerating at high speeds costs more energy than at low speeds. If you push on something that is fast (you apply a force), you will spend more energy doing so because you cover more distance; to give a rocket constant acceleration, you must burn more and more fuel.
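A tiny sketch of that scaling (illustrative numbers; it ignores the exhaust energy and the change of mass, just as the simplified model above does):

```python
import numpy as np

P, m = 1000.0, 100.0          # constant engine power (W) and a fixed mass (kg)
t = np.linspace(0.0, 100.0, 6)

energy = P * t                # constant power: kinetic energy grows linearly
v = np.sqrt(2 * energy / m)   # so the speed only grows like sqrt(t)
print(v)                      # [ 0.  20.  28.28  34.64  40.  44.72]
```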
"Pushing" or "pulling" for a human being is always connected to energy consumption, even when you push a resting object. (It is something with our muscles, I think.) Force isn't connected to energy consumption. When you think about pushing, you probably think rather about applying power than about applying force.
PS:
I know these kind of questions, when you think about something, and the more you think about it, the less it makes sense. And then you try to ask somebody who should know, but they don't understand your problem, and then you wonder if they are all stupid. There is also the other kind of questions where you know you are right and they are wrong, but nobody wants to hear it. It can be frustrating.
PPS:
There is some barrier between the physics and the things we experience. You take a situation, translate it to a formula, do some math, and translate it back. This translation process is blurry. You can write a lot about the interpretation of a theory (here, we have been interpreting classical mechanics), but you will use words for that, and words are not precise.
It is a very good thing that you try to get a non-abstract understanding of the basic physical laws. You might be on the way to becoming a good physicist.
-
So in the end, would you agree that as human observers, we don't naturally "feel" force = ma, but rather, we "feel" power more easily instead? I think you understand what I am asking and what I am feeling. My teacher did not seem to understand when I asked him, so I appreciate it! – high schooler Nov 18 '12 at 2:46
I am not sure, to be honest. – Konstantin Nov 18 '12 at 17:49
Consider objects in a constant gravitational field. That is, for any object of mass $m$, there is a constant force field $|F| = mg$ directed downwards, toward the earth. There is then an associated potential energy $U = mgy$ for a distance $y$ from the surface of the earth. Any object that moves vertically by 1 meter gains a fixed amount of energy regardless of where they started from. This is what characterizes a constant force field.
So (one thing) force tells us how potential energy changes with position. If one makes a straight vertical path from $y=0$ to $y=h$ for some height $h$, it should not yield a different potential energy than taking a very circuitous, meandering route. Each position has exactly one value for the potential energy, and that's all.
Now, consider two objects that travel from the height $y = h$ to $y=0$. The potential energy difference is $\Delta U = mgh$. Let object $A$ start from rest at $y=h$. Let $B$ have some downward velocity. Clearly, $A$ will lose energy less quickly than $B$, for it takes $A$ longer to reach the ground.
That's why velocity affects power gained or lost. Energy losses in a force field depend only on how that force changes with position. If positions are traversed more quickly, then any changes must occur more quickly.
This line of reasoning depends on the notion of fields, rather than forces from things other than fields acting on objects. Nevertheless, it is rare in physics that force explicitly depends on time (rather than depending on position, which in turn may or may not depend on time).
Finally, I urge you to think more closely about momentum, as it is a key concept in physics and more than just a handy quantity to use. Momentum is intricately tied to the concept of mass. If only velocities mattered, we would have no concept of inertial mass at all, for you could add objects' velocities together blindly without regard to how much stuff there was. Mass serves to tell us that, more or less, heavier things matter more than lighter things. A heavy object moving slowly can matter just as much to a problem as a light object moving quickly. How momentum changes directly leads us to the notion of force.
-
A constant force applied to an object at rest has the same 'effect' (in terms of acceleration) as it has on an object moving at constant velocity, but a different 'effect' in terms of kinetic energy. This is because the velocity of an object is relative to some (inertial) frame of reference. An object is deemed 'at rest' or 'moving with constant velocity' when measured with respect to some reference. An object 'at rest' has a constant velocity, namely, zero. This is just a statement of Newton's 1st law.
Kinetic energy is quite different from force. Kinetic energy depends on your frame of reference.
Suppose you're traveling in a spaceship with constant velocity through space. The kinetic energy of the spaceship is constant. Now you turn on the rocket boosters; hot gases are emitted at high velocity from the back of the rocket. Power is transmitted to the spaceship and you accelerate away further into space. As long as the rocket is switched on, you will experience acceleration. It will appear as if the power of the spaceship is constantly increasing!
But wait, if the rocket is applying a constant force, I can understand the kinetic energy of the spaceship increasing, but how is it that its power is increasing and not constant? Isn't the rocket a constant power machine!?
The reason for the apparent discrepancy is that if all of the rocket’s energy is transferred to kinetic energy of the spaceship, then the spaceship will accelerate. Also, as long as the force of the rocket is in the same direction as the spaceship's instantaneous velocity, your speed will increase and so will your power!
To help your intuition get a grasp of this, think what would happen if you suddenly noticed your spaceship was heading for a large asteroid. If you don't switch off your rocket, the power of the impact will be huge! In fact, as long as your rockets keep thrusting your spaceship in the direction of the asteroid, the power of the impact is increasing. Even if you turn off your rocket, you will still be traveling at constant velocity toward the asteroid and will be doomed. The power of the impact is 'fixed' by your speed. In fact, to reduce the impact, you have to reduce your velocity (ie: accelerate in the opposite direction). Say you put two retro-rockets on full reverse-thrust, you will experience a force away from the asteroid, even though you are still traveling towards it, until at some point, you will appear stationary with respect to the asteroid before accelerating away from it.
Now, if you want to land on the asteroid, you must adjust the power of your retro rockets to reduce your speed such that the speed is close to zero at the instant of impact; otherwise your shock absorbers must be good enough to absorb the extra energy!
Constant force does not add energy at a fixed rate!
You will have noticed this when you try to push-start a car. When the car is stationary, your force has little effect on the car. Once the car starts moving though, the force you apply is easily translated into kinetic energy! Of course, this is due to the momentum of the car. It's also easier to push an empty car from standstill than a full one. By the same token, more 'brake power' is needed to stop a heavy moving car (in a given time), the faster it is traveling.
-
You made a really helpful point! The "velocity derivative" of kinetic energy is momentum, so greater momentum leads to greater power as we accelerate. Still, in the case of the rocket, we are physically limited by the rate at which we can burn through our fuel. Thus, if power must then be constant, mustn't force = P/v necessarily drop off as we accelerate? – high schooler Nov 18 '12 at 3:38
Basically, anyway, rockets/engines/propellers/whatever when pushed to their limits are limited by a constant, maximum power, correct? The force output is still clearly nonconstant, unless I am completely still missing the point. – high schooler Nov 18 '12 at 3:42
That is correct. If the rocket/engine/propellers/whatever are providing constant power, then the force will decrease inversely with velocity (that is, component of force in the same direction as the object's velocity). – theo Nov 18 '12 at 4:07
@highschooler, it occurs to me that the issue here may be that you are conflating the concepts of thrust and power. A rocket engine, for example, may develop a constant thrust but the power is split between the rocket and the exhaust products. For example, in the reference frame of the rocket, all of the power of combustion is delivered to the exhaust products. – Alfred Centauri Nov 18 '12 at 4:20
http://www.physicsforums.com/showthread.php?p=3961002
Physics Forums
## Visualisation of Brian Greene's concept.
I thought it would be interesting to do a visualisation of Brian Greene's concept that "everything moves at the speed of light". For a preamble see this old thread http://www.physicsforums.com/showthread.php?t=398590
First I should warn that speed or velocity is not defined in the conventional way here. It uses a concept of 4 space dimensions comprising 3 of the normal spatial dimensions and a fourth dimension with units of cTau where Tau is the proper time of a particle. Speed is here defined as dU/dt where U = √(c^2Tau^2+x^2+y^2+z^2) and t is the conventional coordinate time. It is important to notice that this is not the conventional four velocity. (Note that all the signs are the same). For lack of a better term I will call dU/dt the G-velocity and all the dimensions other than the coordinate time collectively as G-space after Brian Greene.
For simplicity I will only consider the x spatial dimension.
For a particle moving relative to the observer, the G-velocity through G-space is:
$$\frac{dU}{dt} = \frac{\sqrt{ (c d\tau)^2+d x^2}}{dt} = c$$
(See the green path in the attached 3D graph for a particle with a 3 velocity of 0.7071c.)
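(Why this equals c: it follows directly from the Minkowski line element. For motion along x,

$$c^2 d\tau^2 = c^2 dt^2 - dx^2 \quad\Rightarrow\quad (c\, d\tau)^2 + dx^2 = (c\, dt)^2,$$

so dividing by $dt^2$ and taking the square root gives dU/dt = c.)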
For a particle that is at rest with respect to the observer this reduces to:
$$\frac{dU}{dt} = \frac{c d\tau}{dt} = c$$
This is the idea that a stationary particle moves purely through the (proper) time dimension at the speed of light.
(See the blue path in the attached 3D graph)
For a photon the equation reduces to:
$$\frac{dU}{dt} = \frac{d x}{dt} = c$$
This is the idea that a light particle moves purely through the spatial dimensions (of 3 space) at the speed of light. Note that there is no component of the photon's path in the proper time dimension.
(See the red path in the attached 3D graph)
The full graph would require plotting in 5 dimensions (3 spatial + coordinate time + proper time), so only the x spatial dimension is shown along with the two time dimensions in the attached chart. Although it is not very clear from the image, all the paths lie on a 45 degree cone, of which a quarter is shown in the image, with the apex at the origin.
In summary, nothing can remain stationary in G space. You are either moving through 3 space or moving through the proper time dimension or a combination of both.
Two questions: (1) Shouldn't there be square root signs in the numerators of your formulas for dU/dt? There is one in the formula for U. (2) What happens when you change frames?
Quote by PeterDonis Two questions: (1) Shouldn't there be square root signs in the numerators of your formulas for dU/dt? There is one in the formula for U.
Good observation. Thanks for spotting that while I was still able to edit the post. Fixed now!
Quote by PeterDonis (2) What happens when you change frames?
Good question. Not worked that out yet, but the maths should not be too difficult. Visually, if we switched to the rest frame of the green particle, it would look like it has the position of the green particle (i.e. lie on the new cTau, t plane), the green particle path will be rotated about 45 degrees anticlockwise looking from the top, and the red photon path will of course remain in the x,t plane. All the paths will still lie on a 45 degree cone.
See this interactive diagram for a visualization of this idea:
http://www.adamtoons.de/physics/relativity.swf
It doesn't have the t-axis because it is somehow redundant: as your formula shows, the coordinate time is the Euclidean path integral in "space-propertime" (in natural units). I think the idea is older than Greene's though. I know it from Epstein's "Relativity Visualized".
Quote by PeterDonis (2) What happens when you change frames?
This one shows what transforms look like in "space-propertime" compared to "space-coordinate time". Just move the slider called "Observes velocity in A' frame" back and forth.
http://www.adamtoons.de/physics/twins.swf
Quote by A.T. See this interactive diagram for a visualization of this idea: http://www.adamtoons.de/physics/relativity.swf It doesn't have the the t-axis because it is somehow redundant: As your formula shows the coordinate time is the Euclidean path integral in "space-propertime" (in natural units). I think the idea is older than Greene's though. I know it from Epstein's "Relativity Visualized" This one shows how transforms look like in "space-propertime" compared to "space-coordinate time". Just move the slider called "Observes velocity in A' frame" back and forth. http://www.adamtoons.de/physics/twins.swf
Thanks AT for your interactive diagrams. I have seen them previously a long time ago, but I think I understand them a bit better the second time around, especially in the context of this thread. I like that you have factored gravity in and would like to discuss that more. Brief question: is the curvature of spacetime due to gravity in your animation accurate, or just a first order approximation? Would you also agree that the claim that all objects move at c through space(proper)time (as per Brian Greene) is no longer true when gravity is involved?
Quote by yuiop Thanks AT for your interactive diagrams. I have seen them previously a long time ago, but I think I understand them a bit better the second time around, especially in the context of this thread. I like that you have factored gravity in and would like to discuss that more. Brief question. Is the curvature of spacetime due to gravity in your animation accurate or just a first order approximation?
The gravity part in this is just to give a qualitative idea. It lacks the distortion of the spatial dimension. This one is about gravity.
Quote by yuiop Would you also agree that all objects move at c through space(proper)time (as per Brian Greene) is no longer true when gravity is involved?
I prefer the interpretation that the advance rate through space(proper)time is still the same, but the distances between coordinates are increased.
Thanks to both for explaining how this idea works (i.e. that the speed of all objects through space-propertime is c). I had thought so far that it stemmed from the usual four-velocity; I now realize it is a different thing. I was wondering now about its utility. In principle, it looks to me as if in the definition of the so-called Greene or rather Epstein space you combined (vectorially) X + (T – X), which gives T. If you then derive over T (divide by T), you get logically 1… So what…?
Quote by Saw I was wondering now about its utility.
I think the main reason why Greene and Epstein use the space-propertime concept in their popular-scientific books is that it offers a much more intuitive visual model than the pseudo-Euclidean Minkowski space-time.
- In the Epstein-diagrams you see both: propertime and coordinate time directly as distances. You can visually see how speed in space, clock rates, length contraction are related. In fact you can go directly from the moving light clock to a space-propertime diagram, when you identify the vertical light movement-component with proper-time.
- The speed limit in space of c seems easier to grok when you see how it follows from a universal constant advance rate in space-propertime.
- The relation between rest-mass, momentum and total-energy is the same as between delta proper-time, delta space and delta coordinate-time (in natural units). So once you have drawn the space-propertime diagram for an object, you can relabel the axes and visualize those relations in a geometrical way as well.
- In GR you can visualize very directly the relation between gravitational time-dilation and the geodesic paths of free-fallers. And why it takes infinite coordinate time (distant observer), but finite proper time, to fall into a black hole.
Well, I was wondering rather only about the utility of the idea that everything travels at c in this space. As to the space-proper time diagrams, I fully agree that they can be very revealing and didactic. In fact you can see in this recent thread a discussion that was based on and made possible by this sort of diagram. You may find it curious that I started inspired by Epstein diagrams, continued with yours, to which I expressly linked, and ended up, guided by bobc2, discovering that I was aiming at a Loedel diagram. A Loedel diagram is one where the T axis in one frame is perpendicular to the X’ axis of the other frame (just as X is at right angles to T’). Look at this picture. Just eliminate the blue X axis and you have the Epstein-like diagram of your site (where such blue axis is represented, I think, by the yellow stick).
http://johncarlosbaez.wordpress.com/2013/03/11/game-theory-part-20/
# Azimuth
## Game Theory (Part 20)
Last time we tackled von Neumann’s minimax theorem:
Theorem. For every zero-sum 2-player normal form game,
$\displaystyle{\min_{q'} \max_{p'} \; p' \cdot A q' = \max_{p'} \min_{q'} \; p' \cdot A q'}$
where $p'$ ranges over player A’s mixed strategies and $q'$ ranges over player B’s mixed strategies.
We reduced the proof to two geometrical lemmas. Now let’s prove those… and finish up the course!
But first, let me chat a bit about this theorem. Von Neumann first proved it in 1928. He later wrote:
As far as I can see, there could be no theory of games … without that theorem … I thought there was nothing worth publishing until the Minimax Theorem was proved.
Von Neumann gave several proofs of this result:
• Tinne Hoff Kjeldesen, John von Neumann’s conception of the minimax theorem: a journey through different mathematical contexts, Arch. Hist. Exact Sci. 56 (2001) 39–68.
In 1937 he gave a proof which became quite famous, based on an important result in topology: Brouwer’s fixed point theorem. This says that if you have a ball
$B = \{ x \in \mathbb{R}^n : \|x\| \le 1 \}$
and a continuous function
$f: B \to B$
then this function has a fixed point, meaning a point $x \in B$ with
$f(x) = x$
You’ll often see Brouwer’s fixed point theorem in a first course on algebraic topology, though John Milnor came up with a proof using just multivariable calculus and a bit more.
After von Neumann proved his minimax theorem using Brouwer’s fixed point theorem, the mathematician Shizuo Kakutani proved another fixed-point theorem in 1941, which let him get the minimax theorem in a different way. This is now called the Kakutani fixed-point theorem.
In 1949, John Nash generalized von Neumann’s result to nonzero-sum games with any number of players: they all have Nash equilibria if we let ourselves use mixed strategies! His proof is just one page long, and it won him the Nobel prize!
Nash’s proof used the Kakutani fixed-point theorem. There is also a proof of Nash’s theorem using Brouwer’s fixed-point theorem; see here for the 2-player case and here for the n-player case.
Apparently when Nash explained his result to von Neumann, the latter said:
That’s trivial, you know. That’s just a fixed point theorem.
Maybe von Neumann was a bit jealous?
I don’t know a proof of Nash’s theorem that doesn’t use a fixed-point theorem. But von Neumann’s original minimax theorem seems to be easier. The proof I showed you last time comes from Andrew Colman’s book Game Theory and its Applications in the Social and Biological Sciences. In it, he writes:
In common with many people, I first encountered game theory in non-mathematical books, and I soon became intrigued by the minimax theorem but frustrated by the way the books tiptoed around it without proving it. It seems reasonable to suppose that I am not the only person who has encountered this problem, but I have not found any source to which mathematically unsophisticated readers can turn for a proper understanding of the theorem, so I have attempted in the pages that follow to provide a simple, self-contained proof with each step spelt out as clearly as possible both in symbols and words.
There are other proofs that avoid fixed-point theorems: for example, there’s one in Ken Binmore’s book Playing for Real. But this one uses transfinite induction, which seems a bit scary and distracting! So far, Colman’s proof seems simplest, but I’ll keep trying to do better.
### The lemmas
Now let’s prove the two lemmas from last time. A lemma is an unglamorous result which we use to prove a theorem we’re interested in. The mathematician Paul Taylor has written:
Lemmas do the work in mathematics: theorems, like management, just take the credit.
Let’s remember what we were doing. We had a zero-sum 2-player normal-form game with an $m \times n$ payoff matrix $A$. The entry $A_{ij}$ of this matrix says A’s payoff when player A makes choice $i$ and player B makes choice $j$. We defined this set:
$C = \{ A q' : \; q' \textrm{ is a mixed strategy for B} \} \subseteq \mathbb{R}^m$
For example, if
$\displaystyle{ A = \left( \begin{array}{rrr} 2 & 10 & 4 \\-2 & 1 & 6 \end{array} \right) }$
then $C$ looks like this:
We assumed that
$\displaystyle{ \min_{q'} \max_{p'} \; p' \cdot A q' > 0}$
This means there exists $p'$ with
$\displaystyle{ p' \cdot A q' > 0}$
and this implies that at least one of the numbers $(Aq')_i$ must be positive. So, if we define a set $N$ by
$\displaystyle{ N = \{(x_1, \dots, x_m) : x_i \le 0 \textrm{ for all } i\} \subseteq \mathbb{R}^m }$
then $Aq'$ can’t be in this set:
$\displaystyle{ Aq' \notin N }$
In other words, the set $C \cap N$ is empty.
Here’s what $C$ and $N$ look like in our example:
Next, we choose a point in $N$ and a point in $C$:
• let $r$ be a point in $N$ that’s as close as possible to $C,$
and
• let $s$ be a point in $C$ that’s as close as possible to $r,$
These points $r$ and $s$ need to be different, since $C \cap N$ is empty. Here’s what these points and the vector $s - r$ look like in our example:
To finish the job, we need to prove two lemmas:
Lemma 1. $r \cdot (s-r) = 0,$ $s_i - r_i \ge 0$ for all $i,$ and $s_i - r_i > 0$ for at least one $i.$
Proof. Suppose $r'$ is any point in $N$ whose coordinates are all the same as those of $r,$ except perhaps one, namely the $i$th coordinate for one particular choice of $i.$ By the way we’ve defined $s$ and $r$, this point $r'$ can’t be closer to $s$ than $r$ is:
$\| r' - s \| \ge \| r - s \|$
This means that
$\displaystyle{ \sum_{j = 1}^m (r_j' - s_j)^2 \ge \sum_{j = 1}^m (r_j - s_j)^2 }$
But since $r_j' = r_j$ except when $j = i,$ this implies
$(r_i' - s_i)^2 \ge (r_i - s_i)^2$
Now, if $s_i \le 0$ we can take $r'_i = s_i.$ In this case we get
$0 \ge (r_i - s_i)^2$
so $r_i = s_i.$ On the other hand, if $s_i > 0$ we can take $r'_i = 0$ and get
$s_i^2 \ge (r_i - s_i)^2$
which simplifies to
$2 r_i s_i \ge r_i^2$
But $r_i \le 0$ and $s_i > 0,$ so this can only be true if $r_i = 0.$
In short, we know that either
• $r_i = s_i$
or
• $s_i > 0$ and $r_i = 0.$
So, either way we get
$(s_i - r_i) r_i = 0$
Since $i$ was arbitrary, this implies
$\displaystyle{ (s - r) \cdot r = \sum_{i = 1}^m (s_i - r_i) r_i = 0 }$
which is the first thing we wanted to show. Also, either way we get
$s_i - r_i \ge 0$
which is the second thing we wanted. Finally, $s_i - r_i \ge 0$ but we know $s \ne r,$ so
$s_i - r_i > 0$
for at least one choice of $i.$ And this is the third thing we wanted! █
Lemma 2. If $Aq'$ is any point in $C$, then
$(s-r) \cdot Aq' \ge 0$
Proof. Let’s write
$Aq' = a$
for short. For any number $t$ between $0$ and $1$, the point
$ta + (1-t)s$
is on the line segment connecting the points $a$ and $s.$ Since both these points are in $C,$ so is this point $ta + (1-t)s,$ because the set $C$ is convex. So, by the way we’ve defined $s$ and $r$, this point can’t be closer to $r$ than $s$ is:
$\| r - (ta + (1-t)s) \| \ge \| r - s \|$
This means that
$\displaystyle{ (r - ta - (1-t)s) \cdot (r - ta - (1-t)s) \ge (r - s) \cdot (r - s) }$
With some algebra, this gives
$\displaystyle{ 2 (a - s)\cdot (s - r) \ge -t (a - s) \cdot (a - s) }$
Since we can make $t$ as small as we want, this implies that
$\displaystyle{ (a - s)\cdot (s - r) \ge 0 }$
or
$\displaystyle{ a \cdot (s - r) \ge s \cdot (s - r)}$
or
$\displaystyle{ a \cdot (s - r) \ge (s - r) \cdot (s - r) + r \cdot (s - r)}$
By Lemma 1 we have $r \cdot (s - r) \ge 0,$ and the dot product of any vector with itself is nonnegative, so it follows that
$\displaystyle{ a \cdot (s - r) \ge 0}$
And this is what we wanted to show! █
### Conclusion
Proving lemmas is hard work, and unglamorous. But if you remember the big picture, you’ll see how great this stuff is.
We started with a very general concept of two-person game. Then we introduced probability theory and the concept of ‘mixed strategy’. Then we realized that the expected payoff of each player could be computed using a dot product! This brings geometry into the subject. Using geometry, we’ve seen that every zero-sum game has at least one ‘Nash equilibrium’, where neither player is motivated to change what they do—at least if they’re rational agents.
And this is how math works: by taking a simple concept and thinking about it very hard, over a long time, we can figure out things that are not at all obvious.
For game theory, the story goes much further than we went in this course. For starters, we should look at nonzero-sum games, and games with more than two players. John Nash showed these more general games still have Nash equilibria!
Then we should think about how to actually find these equilibria. Merely knowing that they exist is not good enough! For zero-sum games, finding the equilibria uses a subject called linear programming. This is a way to maximize a linear function given a bunch of linear constraints. It’s used all over the place—in planning, routing, scheduling, and so on.
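To make that concrete, here is one standard linear-programming formulation for the zero-sum case (a sketch, not something proved in this course): player A's optimal mixed strategy $p$ and the value $v$ of the game solve

$\displaystyle{ \max_{p,\, v} \; v \quad \textrm{subject to} \quad \sum_{i=1}^m p_i A_{ij} \ge v \textrm{ for all } j, \qquad \sum_{i=1}^m p_i = 1, \qquad p_i \ge 0, }$

and player B's optimal strategy comes from the dual program.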
Game theory is used a lot by economists, for example in studying competition between firms, and in setting up antitrust regulations. For that, try this book:
• Lynne Pepall, Dan Richards and George Norman, Industrial Organization: Contemporary Theory and Empirical Applications, Blackwell, 2008.
For these applications, we need to think about how people actually play games and make economic decisions. We aren’t always rational agents! So, psychologists, sociologists and economists do experiments to study what people actually do. The book above has a lot of case studies, and you can learn more here:
• Andrew Colman, Game Theory and its Applications in the Social and Biological Sciences, Routledge, London, 1982.
As this book title hints, we should also think about how game theory enters into biology. Evolution can be seen as a game where the winning genes reproduce and the losers don’t. But it’s not all about competition: there’s a lot of cooperation involved. Life is not a zero-sum game! Here’s a good introduction to some of the math:
• William H. Sandholm, Evolutionary game theory, 12 November 2007.
For more on the biology, get ahold of this classic text:
• John Maynard Smith, Evolution and the Theory of Games, Cambridge University Press, 1982.
And so on. We’ve just scratched the surface!
### 4 Responses to Game Theory (Part 20)
1. Reeve says:
This has been a fun class! Thanks for everything!
Any chance you might continue this series casually every now and then, touching on more advanced and interesting topics like evolutionary and Bayesian game theory? I’d certainly read it, and I think many of your Google+ followers would enjoy it too!
• John Baez says:
I’m glad you liked the class!
If you want to read my thoughts on evolutionary game theory, start with these posts in my Information Geometry series:
• Part 8 – information geometry and evolution: how natural selection resembles Bayesian inference, and how it’s related to relative entropy. (website version)
• Part 9 – information geometry and evolution: the replicator equation and the decline of entropy as a successful species takes over. (website version)
• Part 10 – information geometry and evolution: how entropy changes under the replicator equation. (website version)
• Part 11 – information geometry and evolution: the decline of relative information. (website version)
• Part 12 – information geometry and evolution: an introduction to evolutionary game theory. (website version)
• Part 13 – information geometry and evolution: the decline of relative information as a population approaches an evolutionarily stable state. (website version)
I have a lot more I’d like to say about this, and a lot more I want to figure out, but this should keep you busy for a little while! Feel free to post comments and questions.
2. James Juniper says:
You may be interested to know that the Japanese mathematician (and mathematical economist) Nikaido sets out a proof of von Neumann’s minimax theorem in his 1960 text, in addition to discussing its relationship to the fixed-point theorems and the Perron–Frobenius theorem:
• Hukukane Nikaido, Introduction to Sets and Mappings in Modern Economics, North Holland, 1960.
Cheers, James
• John Baez says:
Thanks! Now that you mention it, the Perron–Frobenius theorem is another result about linear algebra that’s sometimes proved with the help of the Brouwer fixed point theorem!
http://stats.stackexchange.com/questions/3286/distance-between-empirically-generated-distributions-in-r
# Distance between empirically generated distributions (in R)
I'm not a statistician, but I sometimes need to play around with data. I have two data sets, lists of values in the unit interval. I've plotted them as histograms, so I have an intuitive idea of how "far apart" they are. But I want something a little more formal.
My first thought was to just sum the differences of the values in the bins, but this isn't that satisfactory. Then I thought of taking a three-bin average and summing differences over these. (Apologies if I'm mangling statistics terminology.)
But I was thinking I'm probably reinventing the wheel, so I came here. Similar questions seem to point to "Kolmogorov Smirnov tests" or something like that.
So my question is this: is this the right method to calculate how far these data sets are apart? And is there an easy way to do this in R? Ideally just `KStest(data1,data2)` or something.
Edit To emphasise, I'm particularly interested in ways to measure how far the data are apart directly rather than fitting a distribution to each and then measuring the distance between distributions. [Does that even make sense? I guess numerical calculations in R will be done by sampling from a distribution anyway...]
-
## 3 Answers
You can do a Kolmogorov-Smirnov test using the `ks.test` function. See `?ks.test`.
In general, when you are looking for a function in R (and you don't know its name) try using `??`. For instance, `??"Kolmogorov Smirnov"`. If nothing comes up `RSiteSearch("whatever you're looking for")` should help :)
-
To make it clearer, here you will rather need the Kolmogorov–Smirnov distance than the KS test. – mbq♦ Oct 4 '10 at 10:22
Bonus marks for explaining how to find out stuff for myself. – Seamus Oct 4 '10 at 17:03
A standard way to compare distributions is to use the Kullback-Leibler divergence. As usual, there's an R package that does this for you! From the `?KLdiv` help page in the `flexmix` package, we get the following bit of code:
````
## Gaussian and Student t are much closer to each other than
## to the uniform:
> library(flexmix)
> x = seq(-3, 3, length=200)
> y = cbind(u=dunif(x), n=dnorm(x), t=dt(x, df=10))
> matplot(x, y, type="l")
> round(KLdiv(y),3)
u n t
u 0.000 1.082 1.108
n 4.661 0.000 0.004
t 4.686 0.005 0.000
````
Notice that the comparison isn't symmetric: so uniform vs Normal is different from Normal vs Uniform.
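For reference (this formula is not part of the `flexmix` help-page excerpt above), the quantity being estimated is the Kullback–Leibler divergence
$$D_{KL}(P\,\|\,Q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx,$$
which is not symmetric in $P$ and $Q$, hence the asymmetric table.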
You didn't explain why you wanted to compare distributions. Giving a use-case may get you more specific answers.
-
First thing: define "distance". Sounds like a stupid question, but what do you mean by distance? Is the data paired? Then - and only then - does it make sense to look at the sum of (squared) differences to decide about the distance between two datasets. If not, you have to resort to other means.
Next question is : is the data distributed in the same manner? If so, you can see the difference between the means as the "location shift" of your data (or the distance between both datasets).
But if neither of both is true, how do you define the distance between datasets then? Do you take shape of the distribution into account for example? You really have to think about those issues before trying to calculate a distance.
This said: one (naive) possibility is to use the mean of the (squared) differences between all possible x-y combinations. Formalized, this is:
$$Dist=\sqrt{\frac{1}{n_1 n_2}\sum_{i=1}^{n_1} \sum_{j=1}^{n_2}(X_i - Y_j)^2}$$
In R :
````
x <- rnorm(10)
y <- rnorm(10,2)
sqrt(mean(outer(x,y,"-")^2))
````
If you allow for negative distances, you can drop the sqrt and the ^2 :
````
mean(outer(x,y,"-"))
````
A simulation easily shows that this will indeed give the difference between the means in the example, as both distributions are equal in this case. But be warned that negative distances are not allowed in many applications. In the first scenario, the number will always be a bit larger than the difference between the means. In any case, if you're interested in the difference between the centers of your datasets, define the center and calculate the difference between those centers. That might very well be what you're after.
Contrary to the other suggestions, this approach does not make any assumptions about the distribution of your data. This makes it applicable in all situations, but also difficult to interpret.
-
http://physics.stackexchange.com/questions/44269/real-non-constant-scalar-field-with-special-properties-in-class-of-4-dimensiona/44689
# Real, non-constant scalar field with special properties in class of 4-dimensional spacetimes
David Deutsch (Oxford University) asked the following question which I think is an interesting one:
In what class of 4-dimensional spacetimes does there exist a real, non-constant scalar field φ with the following properties:
• It obeys the wave equation: ◻φ=0
• Its gradient is everywhere null: ∇φ.∇φ=0
Deutsch would "like the answer to be 'almost none'" but I am really not sure...
-
Are $\mathcal{M}\times \mathbb{R}$ candidates, with $\varphi(x^\mu)=\omega\cdot x^0+\lambda$? – Nick Kidman Nov 15 '12 at 14:16
By gradient, you mean the 3d gradient? Or a 3+1d analogue? – Muphrid Nov 15 '12 at 14:49
@Muphrid: How would the answer differ with either one? – vonjd Nov 15 '12 at 15:07
Can you explain the context, and why he would "like the answer to be 'almost none'"? A link would suffice, if this is from an article, or online talk. – Rhys Nov 20 '12 at 12:48
@Rhys: I just wanted to do that - yet his whole site seems to have disappeared?!? - I'll stay on it… – vonjd Nov 21 '12 at 11:19
## 2 Answers
There are many such spacetimes. Already the Minkowski space, $g_{\mu\nu}={\rm const}$, has a non-constant solution $\varphi$ (in either interpretation 1 or interpretation 2 of the question(v1), cf. Muphrid's comment). The wave eq. in a curved spacetime reads
$$\sum_{\mu,\nu=0}^3\partial_{\mu}\sqrt{-g}g^{\mu\nu}\partial_{\nu}\varphi~=~0.$$
1. If e.g. the metric $g_{\mu\nu}$ is of the form $$g_{\mu\nu}~=~\left[ \begin{array}{cc} -1 & 0 \\ 0 &g_{ij}(x^1,x^2,x^3) \end{array} \right], \qquad \mu,\nu=0,1,2,3,\qquad i,j=1,2,3,$$ and $$\sum_{i,j=1}^3(\partial_{i}\varphi)g^{ij}(\partial_{j}\varphi)~=~0,$$ then we can pick an affine function in time $$\varphi(x)~=~ ax^0+b,$$ as Nick Kidman suggests in a comment.
2. If e.g. the metric $g_{\mu\nu}$ is on light-cone form $$g_{\mu\nu}~=~\left[ \begin{array}{cc} 0 & -1 & 0 \\-1 & 0 & 0 \\ 0 & 0 &g_{ij}(x^2,x^3) \end{array} \right], \qquad \mu,\nu=+,-,2,3,\qquad i,j=2,3,$$ and $$\sum_{\mu,\nu}(\partial_{\mu}\varphi)g^{\mu\nu}(\partial_{\nu}\varphi)~=~0,$$ then we can e.g. pick an arbitrary function $$\varphi(x)~=~ f(x^+),$$ where we have used light-cone coordinates $x^{\pm}=\frac{1}{\sqrt{2}}(x^0\pm x^1)$.
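To spell out case 2 (a quick check using only the metric displayed above): since $\varphi=f(x^+)$ depends only on $x^+$, the only non-vanishing derivative is $\partial_+\varphi=f'(x^+)$, and the only inverse-metric components involving the $\pm$ directions are $g^{+-}=g^{-+}=-1$. Hence $$\nabla\varphi\cdot\nabla\varphi~=~2g^{+-}\,\partial_+\varphi\,\partial_-\varphi~=~0, \qquad \sum_{\mu,\nu}\partial_{\mu}\sqrt{-g}g^{\mu\nu}\partial_{\nu}\varphi~=~\partial_-\!\left(\sqrt{-g}\,g^{-+}\,\partial_+\varphi\right)~=~0,$$ because $\sqrt{-g}$ depends only on $x^2,x^3$ and $f'(x^+)$ does not depend on $x^-$.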
NB: It is possible that David Deutsch's claim in simplified terms essentially boils down to the following. Put some measure $\mu$ on the space ${\cal M}$ of all metrics $g_{\mu\nu}$ on, say, spacetime $\mathbb{R}^4$, and consider the subset ${\cal N}\subseteq{\cal M}$ of metrics $g_{\mu\nu}$ that admit a non-constant solution for $\varphi$. David Deutsch's phrase almost none should then be understood as that the subset ${\cal N}$ has measure zero, $\mu({\cal N})=0$. If OP's actual question is whether $\mu({\cal N})$ is zero or not in that sense, then my above answer is insufficient.
-
Why are there many? – MBN Nov 16 '12 at 11:27
I updated the answer. – Qmechanic♦ Nov 16 '12 at 13:22
Raychaudhuri.
Let $V_\mu \equiv \partial _\mu \phi$. null foliation.
null condition: $V^\mu V_\mu =0$.
exterior derivative: $\partial_\mu V_\nu = \partial_\nu V_\mu$, i.e. zero vorticity.
$\square \phi=0$ means zero expansion $\hat\theta =V^\mu{}_{;\mu}=0$.
gradient of null condition: $V^\mu V_{\mu;\nu}=0$
contract exterior derivative: $V^\mu V_{\mu;\nu}=V^\mu V_{\nu;\mu}=(\nabla_{\bf V} {\bf V})_\nu=0$, i.e. zero acceleration, i.e. null geodesics.
Raychaudhuri's null equation when the expansion and vorticity are zero: $2\hat\sigma^2 +T_{\mu\nu}V^\mu V^\nu =0$.
assume $\phi$ is real. then, $\bf V$ is also real and $\hat\sigma^2$ is nonnegative.
Assume there exists a point x where the null energy is always positive for all null directions. Then, no solution exists. This remark doesn't apply for complex $\phi$.
If there's no such point, but if there exists a point x such that for all null geodesics passing through it with nonpositive null energy at x, there always exists another point y on the null geodesic such that the null energy is positive along it there, no solution exists either.
Consider the subclass of Ricci flat metrics. Then, reality means the shear has to be zero everywhere. This means $\bf V$ describes a null Killing vector field. In general, none exist.
-
http://mathoverflow.net/questions/116133?sort=newest
Explicit Computations of Examples in Spin Geometry
I have been trying to learn about spin geometry, Dirac operators, and index theory by reading Lawson/Michelsohn's "Spin Geometry" and Friedrich's "Dirac Operators in Riemannian Geometry." Both are abstract, and basically no explicit examples are worked all the way through.
For example, I have been trying to find the spinor bundles, Dirac operators, and various indices for relatively simple manifolds: spheres and tori. However often these computations are detailed and even when I get to the end, it's not clear that I've done it correctly.
Is there another book, or perhaps online notes, which have a bunch of examples worked through in detail so that I can make sure what I'm doing is correct and also have a bank of examples to look at as I progress?
-
Take a look at "Twistors and Killing spinors on Riemannian manifolds", by Baum, Friedrich, Grunewald and Kath. It has many examples. – José Figueroa-O'Farrill Dec 12 at 3:28
I recommend Bott's paper "The Index Theorem for Homogeneous Differential Operators" and papers on index theory over homogeneous spaces in general. The computations can be done very systematically and explicitly using representation theory, and it includes most basic examples of manifolds. – Paul Siegel Dec 12 at 4:16
In terms of additional resources, pretty much all introductory accounts of spectral triples will at least sketch basic theory for and examples of Dirac operators. Joseph Varilly's lecture notes on spectral triples (toknotes.mimuw.edu.pl/sem3/index.html) include very detailed, step-by-step exercises working through the spin geometry of the circle, 2-torus, and 2-sphere, though without covering any index theory at all. – Branimir Ćaćić Dec 12 at 5:33
Try Chapter 11 of these notes nd.edu/~lnicolae/Lectures.pdf – Liviu Nicolaescu Dec 12 at 14:57
1 Answer
Appendix A to Chapter 9 of the book Elements of Noncommutative Geometry by Gracia-Bondia, Varilly, and Figueroa is titled "Spin geometry of the Riemann sphere". It is 15 pages long and goes into quite some detail. (Some might call that level of detail excruciating, but YMMV.)
As Paul Siegel notes, computations on homogeneous spaces can be done quite effectively using representation theory. Some years ago, in the course of learning about that approach, I wrote up an account of the construction of the spinor bundle, Dirac operator, etc on $S^2$, viewed as the homogeneous space $SU(2)/U(1)$. If you're interested, email me (you can find my email address at my website, linked in my profile) and I can send it to you.
-
http://mathhelpforum.com/calculus/178523-method-least-squares.html
# Thread:
1. ## Method of Least Squares
Question:
A capacitor C is initially charged to 10V and then connected across a resistor R. The current through R, measured at 1 millisecond intervals, is:
| quantity | values | tolerance |
|---|---|---|
| t (ms) | 0, 1, 2, 3, 4 | ±0.5% |
| i (mA) | 4.5, 2.8, 1.5, 1.0, 0.6 | ±2.5% |
(Basically this is a table where i corresponds to the t value)
Using the method of least squares, find R and C.
I need help on what equation I should use. Is it i = i·e^-(t/RC)?
2. Originally Posted by Salcybercat
Question:
A capacitor C is initially charged to 10V and then connected across a resistor R. The current through R is measured at 1 milisecond intervals is:
t : 0 1 2 3 4 ms +-0.5%
i : 4.5 2.8 1.5 1.0 0.6 mA +-2.5%
(Basically this is a table where i corresponds to the t value)
Using the method of least squares, find R and C.
I need help on what equation I should use. Is it i = i·e^-(t/RC)?
If you meant I = I0 * e^{-t/(RC)}, then yes. So take the ln of both sides to do your linear regression.
-Dan
3. Just to make sure, I0 would be the initial current when t=0, which would make the equation I = 4.5 * e^{-t/(RC)}. Am I right?
4. Originally Posted by Salcybercat
Just to make sure, I0 would be the initial current when t=0, which would make the equation I = 4.5 * e^{-t/(RC)}. Am I right?
I0 is one of the pieces of information you want to calculate. Take the ln of both sides of the equation:
$I = I_0 e^{-t/(RC)}$
$ln(I) = ln \left ( I_0 e^{-t/(RC)} \right )$
$ln(I) = ln(I_0) - \frac{1}{RC} \cdot t$
$ln(I) = -\frac{1}{RC} \cdot t + ln(I_0)$
The advantage of this form is that the equation is now in the form y = mx + b where y = ln(I) and x = t. So to do the linear regression use the t values as your x data and ln(I) values as your y data. You will come out with a slope and an intercept. The slope will be equal to -1/(RC) and the intercept will be ln(I0).
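For reference, the standard least-squares formulas the regression uses (with $n$ data points $(t_k, y_k)$, where $y_k = \ln(I_k)$) are
$m = \frac{n\sum t_k y_k - \sum t_k \sum y_k}{n\sum t_k^2 - \left(\sum t_k\right)^2}$ and $b = \frac{\sum y_k - m\sum t_k}{n}$,
which is what a calculator's linear-regression routine computes.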
-Dan
5. Thank you for the clarification! However, I still didn't manage to get the answer. (R=2.2kohm and C = 0.89microFarad)
I've attached my workings below. I hope it's clear enough for you to understand. Do comment on any mistaken steps.
6. Originally Posted by Salcybercat
Thank you for the clarification! However, I still didn't manage to get the answer. (R=2.2kohm and C = 0.89microFarad)
I've attached my workings below. I hope it's clear enough for you to understand. Do comment on any mistaken steps.
I cheated and did the regression on my calculator. My answers are close, but different from yours. If you are allowed to use a computer to do the problem I'd recommend it.
I get m = 0.505943 x 10^3 /s and c = 1.497552. Thus I'm getting 1/(RC) = 0.505943 x 10^3 /s and I0 = 4.47073 x 10^{-3} A.
Now we have to get a bit wily. We only have one number to get RC so we need another relationship. We get that through the definition of capacitance:
C = Q/V, where Q is the initial charge on the capacitor (unknown at this point) and V is the charging potential 10 V. So...
$\frac{1}{RC} = \frac{1}{R \frac{Q}{V}} = \frac{V}{QR}$
Now recall that V = I0*R when the capacitor starts to discharge:
$\frac{1}{RC} = \frac{V}{QR} = \frac{I_0}{Q}$
and we have values of 1/(RC) and I0 from the regression. So you can find Q. From Q you can find C. From C you can find R.
Using my values I get your given answers. Your values give answers slightly off from that. I am at a loss as to what advice to give you to allow you to calculate better numbers.
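(As a worked check with those regression values, arithmetic only: $Q = \frac{I_0}{1/(RC)} = \frac{4.47073 \times 10^{-3}}{505.943} \approx 8.84 \times 10^{-6}$ C, so $C = Q/V \approx 0.88\ \mu F$ and $R = V/I_0 \approx 2.2\ k\Omega$, in line with the given answers.)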
-Dan
7. I must've done something wrong rounding up the numbers when calculating it manually (and no, we wouldn't be allowed to use computers during our exams )
Yes, I was confused about what other equations I needed to use to compare with the RC equation. (I thought of using the equation i = C (dv/dt), but I wasn't sure if the method of least squares would work with a differential equation.)
But thank you for the step-by-step explanation! I got the answers.
8. Originally Posted by Salcybercat
I must've done something wrong rounding up the numbers when calculating it manually (and no, we wouldn't be allowed to use computers during our exams )
Bummer. I particularly hate doing these "by hand." Glad you got the answers and good luck on the exams!
-Dan
http://oop123.wordpress.com/
# oop123
Java Beginner Rambling
September 3, 2011 / oop123
You can see the problem here: Problem #159.
This is my 100th question, woot! Now I just need to complete another 50 questions to become level 4. I doubt I can do another 20 questions, let alone 50, but I will try anyway. Now that little “celebration” is over with, let’s get back to the question. This question is pretty easy. The digital root of a number (in base 10) can be found using the following piecewise function (I love google):
$f(n) = \begin{cases} n\ (mod\ 9) & n\ (mod\ 9) \neq 0 \\ 9 & n\ (mod\ 9) = 0 \end{cases}$
In plain English, for any number n, find the remainder after dividing n by 9. The digital root of n will be the remainder, unless the remainder is 0, in which case the digital root is 9. Kudos to you if you know why this works (it's not that hard; hint: it's related to the fact that if a number's digit sum is divisible by 9, then the number itself is divisible by 9).
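In code, that piecewise function collapses to a one-liner (for positive n; the helper name is mine, it isn't used in the solution below):

```
static int digitalRoot(int n) {
    return (n - 1) % 9 + 1; //same as n % 9, except that multiples of 9 map to 9
}
```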
The maximal Digital Root Sum (MDRS) of each number can be found with dynamic programming, since you can start with the MDRS of small numbers and use them to find the MDRS of bigger numbers. For those of you who can't google, dynamic programming involves solving a complex problem by breaking it down into sub-problems (usually smaller versions of the bigger problem) and doing those instead.
```
final int SIZE = 1000000;
int[] drs = new int[SIZE];
for (int i = 2; i < SIZE; i++) {
//prime number: mdrs = digital root
if (drs[i] == 0) {
drs[i] = i % 9;
if (drs[i] == 0) drs[i] = 9;
}
//a number's digital root can be its mdrs
else if (drs[i] < 10) { //a single number's digital root cannot exceed 9
int candidate = i % 9;
if (candidate == 0) candidate = 9;
drs[i] = Math.max(drs[i], candidate);
}
int currentNum = i;
for (int factor = 2; factor <= i; factor++) {
currentNum += i;
if (currentNum >= SIZE) break;
drs[currentNum] = Math.max(drs[i] + drs[factor], drs[currentNum]);
}
}
//find answer
int sum = 0;
for (int num : drs) sum += num;
System.out.println(sum);
```
Runs in around 225 milliseconds on my computer. CU
Filed under Project Euler, Question 151 - 175
August 29, 2011 / oop123
You can see the problem here: Problem #119.
I brute-forced the problem, and my code is very ugly. It contains a loop that searches for a's, and the loop is pretty much infinite because of the large numbers involved. My code only prints out possible a30, so you also need to try the numbers to find the real solution (luckily, for 30, it gives out the solution right away). For large numbers, my code fails miserably, but it can cope with a small number like 30. I solved the question, so whatever.
```
final int TOLERABLE_LIMIT = 3;
List<BigDecimal> a = new ArrayList<BigDecimal>();
int i = 7;
//loop - break condition at the beginning of loop
while (true) {
i++;
//break if i * i bigger than the current a[30] - meaning a[30] is for sure the answer
//it takes so long for this condition to be fulfilled this is pretty much infinite
if (a.size() >= 30 && a.get(29).compareTo(BigDecimal.valueOf(i * i)) < 0) {
break;
}
//just slightly improve performace, and more importantly, take care of infinite loop involving 10, 100, 1000...
int tempI = i;
while (tempI % 10 == 0) tempI /= 10;
if (tempI == 1) continue;
//loop through the powers of current number, break out of loop if one of the following happens:
//- if the digit sums exceed the current number three times in a row (a heuristic)
//- if there is an a[30] and the current power exceeds it, no need to find a's that are bigger than a[30]
BigDecimal number = BigDecimal.valueOf(tempI);
BigDecimal currentPow = number.pow(2);
int numOfFailures = 0;
while (true) {
currentPow = currentPow.multiply(number);
int digitSum = digitSum(currentPow);
if (digitSum == i) {
if (!a.contains(currentPow)) a.add(currentPow);
Collections.sort(a);
if (a.size() >= 30) System.out.println(a.get(29));
} else if (digitSum > i) {
numOfFailures++;
if (numOfFailures >= TOLERABLE_LIMIT) break;
} else {
numOfFailures = 0;
}
if (a.size() >= 30 && currentPow.compareTo(a.get(29)) > 0) break;
}
}
```
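The snippet calls a `digitSum` helper that wasn't included in the post (and it needs `java.math.BigDecimal` plus the usual `java.util` imports); a minimal version of the missing helper might look like this:

```
static int digitSum(BigDecimal n) {
    //assumes n holds a non-negative whole number, which is true for the powers built above
    int sum = 0;
    for (char c : n.toBigInteger().toString().toCharArray()) sum += c - '0';
    return sum;
}
```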
Runs in <1 millisecond on my computer. The <1 millisecond is the time it took for the answer to be found and printed, and does not involve the huge long (basically infinite) loop. My solution is such a failure, sigh, CU
Filed under Project Euler, Question 101 - 125
August 21, 2011 / oop123
You can see the problem here: Problem #102.
I cheated a little and googled for algorithms to solve this problem. Turns out there are many ways to test whether a triangle contains the origin. The algorithm I chose involves finding the triangle's intersections with the y-axis. It's based on the fact that a triangle contains the origin iff (if and only if) it intersects the y-axis at two points – 1 above the origin and 1 below – OR when one of its vertices is the origin (a boundary case the code below simply treats as not containing the origin).
```
public static boolean containsOrigin(int ax, int ay, int bx, int by, int cx, int cy) {
//one of the three vertex is origin?
if ((ax == 0 && ay == 0) || (bx == 0 && by == 0) || (cx == 0 && cy == 0)) return false;
List<Double> intersections = new ArrayList<Double>(2);
//ax == 0: ay is an intersection
//otherwise, find the intersection (note how the two conditions remove possibility of counting a vertex that's
//on the y-axis as an intersection twice)
if (ax == 0) intersections.add((double) ay);
else if ((ax < 0 && bx > 0) || (ax > 0 && bx < 0)) {
double slope = (((double) by) - ay) / (bx - ax);
intersections.add(-slope * ax + ay);
}
//some copy and paste and changing single letters
if (bx == 0) intersections.add((double) by);
else if ((bx < 0 && cx > 0) || (bx > 0 && cx < 0)) {
double slope = (((double) cy) - by) / (cx - bx);
intersections.add(-slope * bx + by);
}
if (cx == 0) intersections.add((double) cy);
else if ((cx < 0 && ax > 0) || (cx > 0 && ax < 0)) {
double slope = (((double) ay) - cy) / (ax - cx);
intersections.add(-slope * cx + cy);
}
if (intersections.size() < 2) return false;
return ((intersections.get(0) > 0 && intersections.get(1) < 0) ||
(intersections.get(0) < 0 && intersections.get(1) > 0));
}
```
Then we just need to read the text file and pass all the coordinates into the above function.
```
BufferedReader in = new BufferedReader(new FileReader("triangles.txt"));
int answer = 0;
for (int i = 0; i < 1000; i++) {
String[] nums = in.readLine().split(",");
if (containsOrigin(
Integer.parseInt(nums[0]), Integer.parseInt(nums[1]),
Integer.parseInt(nums[2]), Integer.parseInt(nums[3]),
Integer.parseInt(nums[4]), Integer.parseInt(nums[5])))
answer++;
}
System.out.println(answer);
```
Runs in around 40 milliseconds on my computer. CU
Filed under Project Euler, Question 101 - 125
August 18, 2011 / oop123
You can see the problem here: Problem #91.
This problem is pretty easy.
As you can see from the first pictures, if both coordinates are located on the sides of the grid, we can make a right triangle. That right triangle can also be transformed into two other right triangles, as seen in the remaining two pictures. Since there are 50 * 50 = 2500 ways to choose those two coordinates, we already know there are at least 2500 * 3 = 7500 right triangles. Now we just need to figure out a way to find the rest of the triangles.
Let's say we want to find a right triangle with a vertex at P = (1, 2). If we draw a line from the origin O to P, the line has a slope of 2. If we then draw a line through P with slope -1/2, the negative reciprocal of 2, it will be perpendicular to OP. Since the other vertex Q needs to have integer coordinates, we find Q by moving from P 2 blocks to the right and 1 block down (using slope = -1/2 and the definition of slope). See the pictures below for a more visual explanation. The red line has a slope of 2 (1 block right, 2 blocks up) and the blue line has a slope of -1/2 (2 blocks right, 1 block down).
The above right triangle can also be reflected along the line y = x to make another right triangle.
Also, we can extend PQ another 2 blocks to the right and 1 block down and make a different right triangle.
There is one last thing to note. Let's say the P coordinate is (2, 4); the negative reciprocal of its slope would be -2/4. We need to simplify -2/4 to -1/2 before we can use it in the algorithm, or else we will miss some right triangles. Taking all the above points into account, here is the code (surprisingly short for a Project Euler problem):
```
final int SIZE = 50;
int answer = SIZE * SIZE * 3;
for (int x = 1; x <= SIZE; x++) {
for (int y = 1; y <= SIZE; y++) {
int gcf = EulerMath.GCF(x, y); //bunch of small utility methods I kept in EulerMath
int dx = x / gcf;
int dy = y / gcf;
answer += Math.min(y / dx, (SIZE - x) / dy) * 2;
}
}
System.out.println(answer);
```
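`EulerMath.GCF` is one of those small utility methods and is not shown in the post; a standard Euclidean-algorithm version of such a greatest-common-factor helper would be:
```
//greatest common factor of two positive integers, by the Euclidean algorithm
static int GCF(int a, int b) {
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}
```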
Runs in ~3 milliseconds on my computer. Note that the problem can also easily be brute forced due to its small size, oh well. CU
Filed under Project Euler, Question 76 - 100
August 15, 2011 / oop123
You can see the problem here: Problem #88.
This problem is easier than it looks. As you can see from the question, we can add 1s to a list of factors to change its sum while keeping its product the same. For example, 2 * 3 = 6, but 2 + 3 only equals 5. We can add a 1 to the list of factors [(2 * 3) -> (1 * 2 * 3)], which changes the list's sum to 6 but keeps its product at 6. Using this observation, we can derive a simple formula that calculates the k for any list of factors:
k = number_of_factors – sum_of_factors + product_of_factors
So let’s say the list of factors is (2 * 3 * 4), the product_of_factors would be 24, the number_of_factors would be 3, and the sum_of_factors would be 9. Plug those into the above formula, and we get k = 18.
To solve this question, I used a dynamic programming approach again, in which I used an array to accumulate all the lists of factors. However, why store the lists of factors themselves when we can just keep track of their sum_of_factors and number_of_factors? After I finished the question, I realized the number_of_factors and sum_of_factors can be combined into a single number to keep track of (number_of_factors – sum_of_factors, see the formula for k), leading to the following (messy) code.
```
final int SIZE = 12000;
final int RANGE = 24000; //P.S. RANGE can be smaller, down to 12,200
int[] k = new int[SIZE + 1];
List<Set<Integer>> nums = new ArrayList<Set<Integer>>(RANGE + 1);
for (int i = 0; i <= RANGE; i++) nums.add(new HashSet<Integer>());
//dynamically calculate all the k's
for (int i = 2; i <= RANGE / 2; i++) {
nums.get(i).add(-i + 1);
for (int num : nums.get(i)) {
int current = i + i;
int new_num = num - 1;
for (int j = 2; j <= i && current <= RANGE; j++) {
nums.get(current).add(new_num);
int pk = current + new_num;
if (pk <= SIZE && (current < k[pk] || k[pk] == 0)) k[pk] = current;
new_num--;
current += i;
}
}
}
//show the answer
boolean success = true;
for (int i = 2; i < k.length; i++) if (k[i] == 0) success = false;
if (success) {
k = ArraysUtils.sortAndRemoveDuplicates(k);
int sum = 0;
for (int i : k) sum += i;
System.out.println(sum);
}
```
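`ArraysUtils.sortAndRemoveDuplicates` is likewise an unshown utility; a plausible stand-in (an assumption about its behaviour: return a sorted copy with duplicate values removed) is:
```
import java.util.TreeSet;

//returns a sorted copy of the input array with duplicate values removed
static int[] sortAndRemoveDuplicates(int[] values) {
    TreeSet<Integer> set = new TreeSet<Integer>();
    for (int v : values) set.add(v);
    int[] result = new int[set.size()];
    int i = 0;
    for (int v : set) result[i++] = v;
    return result;
}
```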
Runs in around 500 milliseconds on my computer (200 milliseconds if RANGE = 13000). CU
Filed under Project Euler, Question 76 - 100
August 10, 2011 / oop123
You can see the problem here: Problem #95.
Since I am getting stuck on one of Project Euler's problems (a bug somewhere I can't find), I decided to find another (easier) problem to do – voila, problem 95.
### Part 1: Getting the number’s sum of divisors
This is very easy: just use a sieve to accumulate the sums.
```
final int SIZE = 1000000;
//initialize all the sums
int[] sums = new int[SIZE + 1];
Arrays.fill(sums, 1);
for (int i = 2; i <= SIZE / 2; i++) {
for (int j = i + i; j <= SIZE; j += i) {
sums[j] += i;
}
}
```
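As a quick sanity check on the sieve (not part of the original post), the well-known amicable pair 220 and 284 should come out correctly once the loops above have run:
```
//sums[n] now holds the sum of the proper divisors of n
System.out.println(sums[220]); //prints 284
System.out.println(sums[284]); //prints 220
```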
### Part 2: Solving the problem
A dynamic programming approach to this problem makes solving it a breeze. First, we create an integer array `chainNum` to store any calculated amicable chain lengths. `chainNum` can also store -1 to indicate that a number cannot form a chain. Then we just loop from 1 to 1,000,000 and attempt to build an amicable chain from each number. If we find that a number cannot form an amicable chain, we update the corresponding index in `chainNum` with -1. Likewise, when we find an amicable chain, `chainNum` is updated with the chain's length for every member of the chain.
```
//***** initialize chainNum *****
//-1 means no chain exists for the number
//0 means we haven't calculated the chainNum yet
int[] chainNum = new int[SIZE + 1];
chainNum[1] = -1;
chainNum[2] = -1;
chainNum[3] = -1;
chainNum[4] = -1;
//***** finding all the chainNum *****
for (int i = 5; i <= SIZE; i++) {
if (chainNum[i] != 0) continue; //skip numbers that have already been tested
List<Integer> chain = new ArrayList<Integer>();
chain.add(i);
int next = sums[i];
while (true) {
//if the chain fails, update chainNum
if (next > SIZE || chainNum[next] != 0) {
for (Integer n : chain) chainNum[n] = -1;
break;
}
//if a chain is found, update chainNum
int index = chain.indexOf(next);
if (index >= 0) {
//there may be numbers that lead into the chain but are not actually contained in it
for (int j = 0; j < index; j++) chainNum[chain.get(j)] = -1;
for (int j = index; j < chain.size(); j++) chainNum[chain.get(j)] = chain.size();
break;
}
chain.add(next);
next = sums[next];
}
}
//***** just search for the largest chain number *****
int answer = 0;
int largest = 0;
for (int i = 2; i <= SIZE; i++) {
if (chainNum[i] > largest) {
largest = chainNum[i];
answer = i;
}
}
System.out.println(answer);
```
Runs in around 800 milliseconds on my computer. Surprisingly easy. CU
Filed under Project Euler, Question 76 - 100
August 3, 2011 / oop123
You can see the problem here: Problem #83.
This is a simple question once I learned Dijkstra's algorithm (here's the Wikipedia entry on it). The following code is just an implementation of it, nothing too interesting. I used the same algorithm for question #82 too (with just some small changes).
The `main(String[] args)` body:
```
final int SIZE = 80;
BufferedReader in = new BufferedReader(new FileReader("matrix.txt"));
//read in all the Nodes
Node[][] nodes = new Node[SIZE][SIZE];
for (int i = 0; i < SIZE; i++) {
String[] temp = in.readLine().split(",");
for (int j = 0; j < SIZE; j++) {
nodes[i][j] = new Node(Integer.parseInt(temp[j]), i, j);
}
}
//initialize beginning
PriorityQueue<Node> notVisited = new PriorityQueue<Node>();
nodes[0][0].totalCost = nodes[0][0].travelCost;
notVisited.add(nodes[0][0]);
//algorithm
for (int i = 0; i < SIZE * SIZE - 1; i++) {
Node node = notVisited.remove();
node.visited = true;
int row = node.row;
int column = node.column;
if (row == SIZE - 1 && column == SIZE - 1) break;
if (row != 0) {
update(node, nodes[row - 1][column], notVisited);
}
if (row != SIZE - 1) {
update(node, nodes[row + 1][column], notVisited);
}
if (column != SIZE - 1) {
update(node, nodes[row][column + 1], notVisited);
}
if (column != 0) {
update(node, nodes[row][column - 1], notVisited);
}
}
System.out.println(nodes[SIZE - 1][SIZE - 1].totalCost);
```
The `Node` class and one of the methods for the algorithm.
```
static class Node implements Comparable<Node> {
public int totalCost = Integer.MAX_VALUE;
public boolean visited = false;
public int travelCost;
public int row;
public int column;
public Node(int cost, int row, int column) {
travelCost = cost;
this.row = row;
this.column = column;
}
@Override
public int compareTo(Node node) {
if (node.totalCost > this.totalCost) {
return -1;
} else if (node.totalCost < this.totalCost) {
return 1;
}
return 0;
}
}
static void update(Node start, Node target, PriorityQueue<Node> queue) {
if (start.totalCost + target.travelCost < target.totalCost) {
target.totalCost = start.totalCost + target.travelCost;
//java.util.PriorityQueue does not reorder an element whose key changes in place,
//so remove any stale entry here; it will be re-added below with its new cost
queue.remove(target);
}
if (!target.visited && !queue.contains(target)) {
queue.add(target);
}
}
```
Runs in around 125 milliseconds on my computer. Nothing to this problem if you know (how to google for) path finding algorithms.
http://math.stackexchange.com/questions/tagged/invariance+elementary-number-theory
# Tagged Questions
### Using the invariance principle: how to solve $n+d(n)+d(d(n))=m$?
Let $d(n)$ be the digital sum of $n$. How to solve $n+d(n)+d(d(n))=m$, where $n$ and $m$ are natural?
http://mathoverflow.net/questions/101598/homotopy-of-random-simplicial-complexes/101626
## Homotopy of random simplicial complexes
A random graph on $n$ vertices is defined by selecting the edges according to some probability distribution, the simplest case being the one where the edge between any two vertices exists with probability $p = \frac{1}{2}$. I believe this is the Erdős–Rényi model $G(n,p)$ for generating random graphs.
Similarly, in higher dimensions we can construct random simplicial complexes on $n$ vertices in many ways. One such method is as follows: fix a top dimension $d$, and define the random simplicial model $S_d(n,p)$ where each $d$-simplex spanning any $d+1$ vertices exists with probability $p$. Some work has been done investigating the homology of such complexes in limiting cases, see for example this paper.
What is known about the properties of the fundamental group (or higher homotopy groups) of random simplicial complexes?
If there is a good reference, that would be enough. I can not find one on google. Thank you for your time.
@jc: I would suggest you make your comment an answer, since that paper is the state of the art in the field. – Igor Rivin Jul 8 at 4:42
## 1 Answer
Babson, Hoffman, and Kahle have written a paper on fundamental groups of random 2-complexes. They worked with the Linial-Meshulam model whereby you begin with a complete graph on $n$ vertices and then add independently uniformly random 2-simplices.
Babson has just written a paper on the fundamental groups of clique complexes of Erdős–Rényi random graphs using similar techniques.
Thank you for the answer. – Pinying Jul 8 at 16:40
If I recall Babson's results, depending on your biases in generation of 2-complexes you get a landscape of results of the form: generically the fundamental group tends to be either trivial, hyperbolic or free. Other types of groups tend to be rare. – Ryan Budney Jul 23 at 2:00
http://rjlipton.wordpress.com/2009/08/07/fermats-little-theorem-for-matrices/
## a personal view of the theory of computation
by
Fermat’s little theorem for matrices with application to potential factoring algorithms
Pierre de Fermat is perhaps best known for the proof that he never wrote down, and perhaps never had.
I have discovered a truly remarkable proof which this margin is too small to contain. (Around 1637)
This of course refers to his famous “Fermat’s Last Theorem,” that was finally proved by Andrew Wiles in 1995. Most doubt whether Fermat really had a proof, but it is an intriguing part of the history of this famous problem. If Fermat had a solution, he certainly did not have the brilliant one that Wiles found.
Today I would like to talk about some new results, discovered this century, that generalize Fermat’s Little Theorem to matrices. They do not seem to be well known, but they are quite pretty. Perhaps they may have applications to some complexity theory problems.
While Fermat’s Last Theorem is very hard to prove, the version for polynomials is much easier to resolve. In particular, one can prove that
$\displaystyle a(x)^{m} + b(x)^{m} = c(x)^{m}$
has no solutions in non-constant polynomials, for ${m>2}$. There are many proofs of this fact. Often changing a problem from integers to polynomials makes it easier.
Let’s now turn to study number theory problems for matrices, not for integers nor for polynomials.
Fermat’s Little Theorem
This states,
Theorem: If ${p}$ is a prime and ${m}$ an integer, then ${m^{p} \equiv m \bmod p}$.
There are many proofs of this beautiful theorem. One is based on the following observation:
$\displaystyle (x_{1} + \cdots + x_{m})^{p} \equiv x_{1}^{p} + \cdots + x_{m}^{p} \bmod p.$
The proof of this equation follows from the binomial theorem, and the fact that
$\displaystyle {p \choose k} \equiv 0 \bmod p$
for ${p}$ prime and ${0<k<p}$. Then, simply set all ${x_{i}=1}$, and that yields,
$\displaystyle (1 + \cdots + 1)^{p} \equiv 1 + \cdots + 1 \bmod p,$
which is ${ m^{p} \equiv m \bmod p.}$
Matrices
The brilliant Vladimir Arnold once stated:
There is a general principle that a stupid man can ask such questions to which one hundred wise men would not be able to answer. In accordance with this principle I shall formulate some problems.
Since Arnold is certainly not stupid, he was joking. Yet there is some truth to his statement: it is notoriously easy to raise questions in number theory that sound plausible, hold for many small cases, and yet are impossible to prove. Fermat’s Last Theorem exactly fit this category for over three hundred years.
In any event, Arnold did extensive numerical experiments, to search for a way to generalize Fermat’s Little Theorem to matrices. He noticed, immediately, that simply replacing the integer ${m}$ by a matrix ${A}$ will not work. For example, consider a matrix ${A}$ that is nilpotent, but is not zero. Recall a matrix is nilpotent provided some power of the matrix is ${0}$. Then, for ${p}$ large enough, ${A^{p} = 0}$ and so clearly ${A^{p} \not\equiv A \bmod p}$.
Thus, Arnold was forced to extend the notion of what it means for a theorem to be “like” Fermat’s Little Theorem. After extensive experiments he made the following conjecture about the trace of matrices. Recall that ${\mathsf{trace}(A)}$ is the sum of the elements on the main diagonal of the matrix ${A}$.
Conjecture: Suppose that ${p}$ is a prime and ${A}$ is a square integer matrix. Then, for any natural number ${k \ge 1}$,
$\displaystyle \mathsf{trace}(A^{p^{k}}) \equiv \mathsf{trace}(A^{p^{k-1}}) \bmod p^{k}.$
In his paper, published in 2006, he found an algorithm that could check his conjecture for a fixed size ${d \times d}$ matrix and a fixed prime. He then checked that it was true for many small values of ${d}$ and ${p}$, yet he did not see how to prove the general case. He did prove the special case of ${k=1}$.
Finally, the general result was proved by Alexander Zarelua in 2008 and independently by others:
Theorem: Suppose that ${p}$ is a prime and ${A}$ is a square integer matrix. Then, for any natural number ${k \ge 1}$,
$\displaystyle \mathsf{trace}(A^{p^{k}}) \equiv \mathsf{trace}(A^{p^{k-1}}) \bmod p^{k}.$
An important special case is when ${k=1}$, recall Arnold did prove this case,
$\displaystyle \mathsf{trace}(A^{p}) \equiv \mathsf{trace}(A) \bmod p.$
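For a quick numerical check, take the ${2 \times 2}$ matrix ${A}$ with rows ${(1,1)}$ and ${(1,0)}$; its powers have traces given by the Lucas numbers, ${\mathsf{trace}(A^{n}) = L_{n} = 1, 3, 4, 7, 11, 18, 29, \dots}$ Indeed ${\mathsf{trace}(A^{5}) = 11 \equiv 1 = \mathsf{trace}(A) \bmod 5}$ and ${\mathsf{trace}(A^{7}) = 29 \equiv 1 \bmod 7}$, as the congruence predicts.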
For an integer matrix ${A}$, the corresponding coefficients of the characteristic polynomials of ${A}$ and ${A^{p}}$ are congruent ${\bmod p}$. This, in effect, generalizes the statement about traces.
Factoring
Arnold’s conjecture—now theorem—has some interesting relationship to the problem of factoring. If we assume that factoring is hard, then his theorem can be used to prove lower bounds on the growth of the traces of matrix powers.
Let’s start with a simple example. I will then show how we can get much more powerful statements from his theorem. Consider a single integer matrix ${A}$, and look at the following simple algorithm where ${n=pq}$ is the product of two primes.
1. Compute ${\alpha = \mathsf{trace}(A^{n}) \bmod n.}$
2. Then, compute the greatest common divisor of ${\alpha-k}$ and ${n}$ for all ${k=1,\dots,\log^{c} n}$ where ${c}$ is a constant.
Clearly, if this finds a factor of ${n}$ it is correct. The only question is when will that happen? Note, by Arnold’s theorem,
$\displaystyle \begin{array}{rcl} \alpha &\equiv& \mathsf{trace}(A^{pq}) \bmod p \\ &\equiv& \mathsf{trace}(A^{q}) \bmod p. \end{array}$
Also,
$\displaystyle \begin{array}{rcl} \alpha &\equiv& \mathsf{trace}(A^{pq}) \bmod q \\ &\equiv& \mathsf{trace}(A^{p}) \bmod q. \end{array}$
The key observation is: suppose that the traces of the powers of the matrix ${A}$ grow slowly. Let ${\alpha \bmod p \neq \alpha \bmod q}$, and let one of these values be small. Then, for some ${k}$ the value ${\alpha-k}$ will be zero modulo, say, ${p}$ and not zero modulo ${q}$. Thus, the gcd computation will work.
All this needs is a matrix whose powers have traces that grow slowly and yet are different. I believe that this is impossible, but I am not positive.
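To make the two steps concrete, here is a minimal sketch in Java, assuming a fixed ${2 \times 2}$ matrix, a toy modulus, and exact `BigInteger` arithmetic; the class name, the matrix, and the bound on ${k}$ are chosen only for illustration. Whether some family of matrices makes this work for numbers of cryptographic size is exactly the question raised above.
```
import java.math.BigInteger;

public class TraceFactor {
    //2x2 matrix product, reduced modulo n
    static BigInteger[][] mul(BigInteger[][] x, BigInteger[][] y, BigInteger n) {
        BigInteger[][] z = new BigInteger[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                z[i][j] = x[i][0].multiply(y[0][j]).add(x[i][1].multiply(y[1][j])).mod(n);
        return z;
    }

    //A^m mod n by repeated squaring
    static BigInteger[][] pow(BigInteger[][] a, BigInteger m, BigInteger n) {
        BigInteger[][] r = {{BigInteger.ONE, BigInteger.ZERO}, {BigInteger.ZERO, BigInteger.ONE}};
        while (m.signum() > 0) {
            if (m.testBit(0)) r = mul(r, a, n);
            a = mul(a, a, n);
            m = m.shiftRight(1);
        }
        return r;
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(221); //toy example: 13 * 17
        BigInteger[][] A = {{BigInteger.ONE, BigInteger.ONE}, {BigInteger.ONE, BigInteger.ZERO}};
        //step 1: alpha = trace(A^n) mod n
        BigInteger[][] An = pow(A, n, n);
        BigInteger alpha = An[0][0].add(An[1][1]).mod(n);
        //step 2: try gcd(alpha - k, n) for small k
        for (int k = 1; k <= 64; k++) {
            BigInteger g = alpha.subtract(BigInteger.valueOf(k)).gcd(n);
            if (g.compareTo(BigInteger.ONE) > 0 && g.compareTo(n) < 0) {
                System.out.println("factor found: " + g);
                return;
            }
        }
        System.out.println("no factor found with this matrix");
    }
}
```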
We can weaken the requirements tremendously. For example, replace one matrix ${A}$ by a family of integer matrices ${A_{1},\dots,A_{k}}$. Then, define ${\alpha(m)}$ as:
$\displaystyle \sum_{i=1}^{k} \lambda_{i}\mathsf{trace}(A_{i}^{m})$
where all ${\lambda_{i}}$ are integers. Note, this value is always an integer.
Now the key is the behavior of the function ${\alpha(m)}$. In order to be able to factor this function must have two properties:
1. There must be many values of ${m}$ so that ${\alpha(m)}$ is small;
2. The values of ${\alpha(p)}$ and ${\alpha(q)}$ should often be distinct.
If these properties are true, then the above method will be a factoring algorithm. Thus, an example of reverse complexity theory (RCT) is: if factoring is hard, then any ${\alpha(m)}$ that is non-constant must not have many small values. This can easily be made into quantitative bounds. I know that some of these results are proved, but their proofs often depend on deep properties from algebraic number theory. The beauty of the connection with factoring is that the proofs are simple—given the strong hypothesis that factoring is hard.
Since I am open to the possibility that factoring is easy, as you probably know, I hope that there may be some way to use these ideas to attack factoring. But either way I hope you like the connection between the behavior of matrix powers and factoring.
Gauss’s Congruence
There are further matrix results that also generalize other theorems of number theory. For example, the following is usually called Gauss’s congruence:
Theorem: For any integer ${a}$ and natural number ${m}$,
$\displaystyle \sum_{d | m} \mu(\frac{m}{d})a^{d} \equiv 0 \bmod m.$
Here ${\mu(n)}$ is the famous Möbius function: if $n=1$, then $\mu(n)=1$; if $n$ is divisible by the square of a prime, then $\mu(n)=0$; if $n$ is divisible by $k$ distinct primes, then $\mu(n)=(-1)^{k}$.
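For example, with ${a=2}$ and ${m=6}$ the left-hand side is
$\displaystyle \mu(6)\cdot 2 + \mu(3)\cdot 2^{2} + \mu(2)\cdot 2^{3} + \mu(1)\cdot 2^{6} = 2 - 4 - 8 + 64 = 54,$
which is indeed divisible by ${6}$.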
By the way, why does Gauss get everything named after him? I guess we should join in and create a complexity class that is called GC, for “Gauss’s Class.” Any suggestions?
Zarelua proves that Gauss’s congruence generalizes nicely to matrices:
Theorem: For any integer matrix ${A}$ and natural number ${m}$,
$\displaystyle \sum_{d | m} \mu(\frac{m}{d})\mathsf{trace}(A^{d}) \equiv 0 \bmod m.$
As an example, let ${m=pq}$ be the product of two primes. Then this theorem shows that we can determine,
$\displaystyle \alpha = \mathsf{trace}(A^{p}) + \mathsf{trace}(A^{q}) \bmod pq$
for any integer matrix ${A}$. What intrigues me is, if ${\alpha}$ is less than ${pq}$, then we get the exact value of
$\displaystyle \mathsf{trace}(A^{p}) + \mathsf{trace}(A^{q}).$
Can we do something with this ability? In particular, can it be used to get some information about the values of ${p}$ and ${q}$?
Open Problems
I believe there are two interesting types of questions. The first is what can we say about the growth of such sums of matrix traces? Can we improve on known bounds or give shorter proofs based on the hardness of factoring? This would be a nice example of RCT.
Second, a natural idea is to look at other Diophantine problems and ask what happens if the variables range over matrices? What known theorems remain true, which become false, and which become open?
from → People, Proofs
10 Comments
1. August 7, 2009 6:53 pm
Some comments:
1. Here is a short proof of the trace result. A is congruent mod p to a matrix with non-negative integer entries, so let it denote the adjacency matrix of a graph. The cyclic group of order $p^k$ acts on closed walks of length $p^k$ in the obvious way, and computing $\text{tr}(A^{p^k}) \bmod p^k$ simply means ignoring the orbits of size $p^k$. The remaining orbits must all be obtained by repeating walks of length $p^{k-1}$, whence the result. This generalizes the argument I gave here.
2. In a similar way one obtains a short proof of Gauss’s congruence: by Mobius inversion $\frac{1}{m} \sum_{d | m} \mu \left( \frac{m}{d} \right) \text{tr}(A^d)$ counts the number of aperiodic paths of length $m$.
3. The growth rate of traces of an integer matrix is related to the Mahler measure problem, since if a matrix’s traces grow slowly its characteristic polynomial has small Mahler measure. So this seems like a difficult but interesting avenue of attack. If the conjecture is true then the traces grow at least as fast as $1.1762^k$.
• August 7, 2009 7:00 pm
Whoops, some corrections.
1, 2. The formula that didn’t parse is $\text{tr}(A^{p^k}) \bmod p^k$. I guess what I’m trying to say here is that I am astonished these results aren’t already well-known in the literature. Surely there are references earlier than this decade!
3. Even the truth of the Mahler measure conjecture leaves open the possibility that there are many small roots of absolute value greater than $1$. Based on the known extremisers, however, this seems extremely unlikely.
• rjlipton *
August 7, 2009 11:00 pm
I am sure there are. But these are the ones that I found. In any event these results do not seem to be well known…
2. kk
August 10, 2009 1:11 am
“If Fermat had a solution, he certainly did not have the brilliant one that Wiles found. ”
Quite a stupid statement!
• rjlipton *
August 10, 2009 6:58 am
I guess it is a bit. But what I really meant is that… oh forget it.
3. Le Gu
September 11, 2009 6:49 pm
There is a simple lifting of any integer matrix A to a boolean matrix B which adds only zeros to the spectrum: spectrum(B) = spectrum(A) + {zeroes}. Actually, in some universal basis B = A \oplus 0.
Thus it is sufficient to consider just {0,1} matrices. Perhaps this observation can simplify things. I really enjoyed the post; your blog is one of my favorites.
• rjlipton *
September 11, 2009 9:06 pm
Thanks very much. Perm(A)=0 is easy for non-negative matrices since it reduces to asking whether there is a matching.
4. December 4, 2009 10:12 pm
There’s a simple generalization of Fermat’s Little Theorem to matrices:
Let A be a square matrix with non-zero eigenvalues, with A diagonalizable as A = SDS^{-1}; let p be prime, and let all residues be taken with respect to the components of the matrix (component-wise).
A^{p-1} = (SDS^{-1})^{p-1} = SD^{p-1}S^{-1} = SIS^{-1} = I (mod p).
Therefore:
A^{p-1} = I (mod p).
http://math.stackexchange.com/questions/205979/opposite-of-factorial?answertab=oldest
# Opposite of Factorial? [duplicate]
Possible Duplicate:
Is there a way to solve for an unknown in a factorial?
I was just wondering, what would be the opposite of factorial?
For example, If I had $n! = 120$. How can I then show algebraically that $n = 5$?
You could repeatedly divide your number by increasing integers; and if at any point you divide by $k$ and are left with $k+1$ then you know that the number you started with is $(k+1)!$. – Clive Newstead Oct 2 '12 at 13:03
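A sketch of that repeated-division idea in code (the class and method names are made up for illustration; it reports $n$ when the input equals $n!$ and $-1$ otherwise):
```
import java.math.BigInteger;

public class InverseFactorial {
    //returns n if x == n! for some n >= 1, or -1 if x is not a factorial
    static long inverseFactorial(BigInteger x) {
        if (x.signum() <= 0) return -1;
        if (x.equals(BigInteger.ONE)) return 1; //1! = 1 (and 0! = 1 as well)
        long k = 2;
        while (true) {
            BigInteger[] qr = x.divideAndRemainder(BigInteger.valueOf(k));
            if (qr[1].signum() != 0) return -1;      //not divisible by k: not a factorial
            x = qr[0];
            if (x.equals(BigInteger.ONE)) return k;  //divided down to 1: the input was k!
            k++;
        }
    }

    public static void main(String[] args) {
        System.out.println(inverseFactorial(BigInteger.valueOf(120))); //5
        System.out.println(inverseFactorial(BigInteger.valueOf(200))); //-1
    }
}
```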
Are you looking for a solution that works when the result is a non-integer? For example, suppose someone asks you for $n$ such that $n! = 200$. Do you want to say "There is no such $n$," or do you want to say "$n$ would have to be between 5 and 6", or do you want to say "$n\approx 5.297$"? – MJD Oct 2 '12 at 13:05
@MJD If possible, I would like to know both methods. – gekkostate Oct 2 '12 at 13:07
## 1 Answer
[Added because of a question in a comment] The generalization of the factorial is the gamma function: $n! = \Gamma(1+n)$, where we can also insert noninteger values for $n$: $y = \Gamma(z)$, so that we have a function over the complex numbers $z$ (except for the poles at the non-positive integers). [/added]
The gamma function has two real fixpoints. If you write the power series of the gamma around one of those fixpoints, then this power series has no constant term and can be reverted by series reversion. From this you can then get the inverse of the gamma, and from this the inverse of the factorial. Unfortunately, the convergence radii of these series are both small, so I can't say at the moment how useful this process would actually be.
(I think I've seen a question concerning the inverse of the gamma here or on MO, and possibly even showed a couple of that coefficients: see here for a short discussion)
I don't know what the gamma function is.. Can you please offer an explanation to what this means? – gekkostate Oct 2 '12 at 13:12
@gekkostate: you could start at Wikipedia or Mathworld – Ross Millikan Oct 2 '12 at 13:35
http://mathhelpforum.com/advanced-algebra/176853-resolving-vector-into-sum-two-other-vectors-using-dot-products.html
# Thread:
1. ## Resolving a vector into the sum of two other vectors (using dot products)
Hi, I'm having trouble with a linear algebra question and would really appreciate some help.
resolve the vector u = 5i + j + 6k into a sum of two vectors, one of which is parallel to and the other perpendicular to v = 3i - 6j + 2k
Thanks!
2. $\displaystyle U_\parallel = \frac{U \cdot V}{V \cdot V}\,V \quad \& \quad U_\bot = U - U_\parallel$
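Working this out for the given vectors: $U\cdot V = (5)(3)+(1)(-6)+(6)(2) = 21$ and $V\cdot V = 9+36+4 = 49$, so
$\displaystyle U_\parallel = \tfrac{3}{7}\left(3i - 6j + 2k\right) = \tfrac{9}{7}i - \tfrac{18}{7}j + \tfrac{6}{7}k, \qquad U_\bot = U - U_\parallel = \tfrac{26}{7}i + \tfrac{25}{7}j + \tfrac{36}{7}k.$
As a check, $U_\bot \cdot V = \tfrac{1}{7}\left(78 - 150 + 72\right) = 0.$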
http://mathhelpforum.com/advanced-algebra/196430-find-minimal-polynomial-2-over-q-3-a-print.html
# Find the minimal polynomial for a = π^2 over Q(π^3)
• March 26th 2012, 10:11 AM
lteece89
Find the minimal polynomial for a = π^2 over Q(π^3)
If it were in Q then the polynomial would be p(x) = x - π^2, but I don't know if being over Q(π^3) changes that. I think that when you compare the fields you get degree 1, but again, I'm not completely sure.
• March 26th 2012, 10:33 AM
Sylvia104
Re: Find the minimal polynomial for a=π^2 over Q(π^3)
If the extension is $\mathbb Q(\pi)/\mathbb Q\left(\pi^3\right)$ the minimal polynomial is $x^3-\pi^6.$
• March 26th 2012, 11:54 AM
lteece89
Re: Find the minimal polynomial for a = π^2 over Q(π^3)
Is this because the minimal polynomial needs to be degree three?
• March 26th 2012, 01:40 PM
Sylvia104
Re: Find the minimal polynomial for a=π^2 over Q(π^3)
The minimal polynomial here is the monic polynomial $f(x)$ of minimal degree with coefficients in $\mathbb Q\left(\pi^3\right)$ such that $f\left(\pi^2\right)=0.$ If $f(x)$ has degree $1$ or $2,$ $f\left(\pi^2\right)$ is of the form $a_0+\pi^2$ or $a_0+a_1\pi^2+\pi^4,$ where $a_0,a_1\in\mathbb Q\left(\pi^3\right);$ clearly these cannot be $0.$
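Concretely, $(\pi^2)^3 = \pi^6 = \left(\pi^3\right)^2 \in \mathbb Q\left(\pi^3\right),$ so $x^3 - \pi^6$ is monic of degree $3,$ has coefficients in $\mathbb Q\left(\pi^3\right),$ and has $\pi^2$ as a root; combined with the degree argument above, it is therefore the minimal polynomial.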
http://math.stackexchange.com/questions/156104/limit-of-a-function-tending-to-zero
# Limit of a function tending to zero.
If $F(t)$ is twice differentiable at $x$ and $$G(h)=\max_{t\in(0,h)}\left[\frac{F'(x+t)-F'(x-t)}{2t}-F''(x)\right],$$ where $x$ is fixed; then how can we show that $\displaystyle\lim_{h\to 0}G(h)=0$.
There are abstract duplicates of this question here and here. – Zev Chonoles♦ Jun 9 '12 at 15:48
## 4 Answers
Hint: $$\frac{F'(x+t)-F'(x-t)}{t}=\frac{F'(x+t)-F'(x)}{t}+\frac{F'(x-t)-F'(x)}{-t}.$$
@Thomas: Indeed very useful! I started with the obvious one, then tried to give a little more. – André Nicolas Jun 9 '12 at 15:57
You removed the $t$, I removed my comment. A pity, this equation would have allowed me to show (no, I won't tell ;-) – user20266 Jun 9 '12 at 15:59
@Kns: I was being too indirect. Have changed the hint, added primes. Note that each part is related to the derivative of $F'$. – André Nicolas Jun 9 '12 at 16:05
Thanks a lot Andre! – Kns Jun 9 '12 at 16:17
Hint: By definition
$$F''(x) = \lim_{h\to 0} \frac{F'(x+h)-F'(x)}{h}$$
Couple this with Andre's comment
Since $\,t\in(0,h)\,$ , we have that $\,h\to 0\Longrightarrow t\to 0\,$ , so: $$\lim_{t\to 0}\frac{F'(x+t)-F'(x-t)}{2t}=\lim_{t\to 0}\frac{1}{2}\left[\frac{F'(x+t)-F'(x)}{t}+\frac{F'(x-t)-F'(x)}{-t}\right]$$ and you get what you want since we know $\,F''(x)\,$ exists, so the limit defining this second derivative exists.
Since $h\to0$ means $t\to0$, so $$\displaystyle\lim_{h\to 0}G(h) = \lim_{t\to 0}\left[\frac{F'(x+t)-F'(x-t)}{2t}-F''(x)\right]$$ apply l'Hôpital's rule,we can get $$\lim_{t\to 0}\left[\frac{F'(x+t)-F'(x-t)}{2t}-F''(x)\right] = \lim_{t \to 0}\left[\frac{F''(x+t)+F''(x-t)}{2}-F''(x)\right] = 0$$
ps: your tags include real analysis, so I assume $x\in \mathbf{R^{n}}$, although real analysis isn't only about real numbers. I don't know if l'Hôpital's rule can be applied in other situations.
(This was my first intent, too: L'Hospital) I've a doubt here: in the RHS the $\,t's\,$ disappeared while you still are taking the limit when $\,t\to 0\,$ , but this is equivalent to plug $\,t=0\,$ after applying L'Hospital, which is justified if $\,F''\,$ is continuous in $\,x\,$, something we can't know... – DonAntonio Jun 9 '12 at 16:27
If I recall correctly, L'Hospital does not require the derivative to be continuous(namely $F''$ here), the existent of the derivative in the area is enough – haohaolee Jun 9 '12 at 17:05
Indeed it doesn't, but you require it to put $$\lim_{t\to 0}\frac{F''(x+t)+F''(x-t)}{2}=\frac{F''(x)+F''(x)}{2}$$which is what in fact you did, or, of course, justify otherwise this equality. – DonAntonio Jun 9 '12 at 17:13
oh, you are right, my fault, thanks. edit it. And then I get what you meant.... thinking now – haohaolee Jun 9 '12 at 17:18
seems it is not appropriate here. Thanks for the reminding – haohaolee Jun 9 '12 at 17:28
http://physics.stackexchange.com/questions/tagged/chern-simons-theory
# Tagged Questions
Chern-Simons theory is an example of a topological quantum field theory. Its describes the field dynamics through the so-called Chern-Simons-form, hence its name.
### Supersymmetric Chern-Simons theories in $d=3$
I am reading up on Chern-Simons matter theories in $d=3$. Here is the quote (from http://thesis.library.caltech.edu/7111 page 15) that I am having trouble with: One could also add a supersymmetric ...
### Gauge invariance and diffeomorphism invariance in Chern-Simons theory
I have studied Chern-Simons (CS) theory somewhat and I am puzzled by the question of how diff. and gauge invariance in CS theory are related, e.g. in $SU(2)$ CS theory. In particular, I would like to ...
### Chern-Simons degrees of freedom
I'm currently reading the paper http://arxiv.org/abs/hep-th/9405171 by Banados. I am just getting acquainted with the details of Chern-Simons theory, and I'm hoping that someone can explain/elaborate ...
### Reference on Chern-Simons theory
I have recently been trying to refresh my memory on the Quantum Field Theory I learned 25 years ago while getting my Ph. D. At the time I did not study Chern-Simons modifications to QFT Lagrangians. ...
### Chern-Simons term
In the literature I can only find Chern-Simons terms ( i.e. for a 3-dimensional manifold $A \wedge dA + A \wedge A \wedge A$) for odd-dimensional manifolds. Why can't I write such forms for ...
### Understanding Cherns-Simons-Witten Theory
I want to read about Wittens work, on Cherns-Simons theory, and relations to knots and jones polynomials. I am extremely motivated to read his paper: Quantum Field Theory and Jones polynomial. What ...
### What is non-Abelian about non-Abelian Chern-Simons' theory?
One is aware that in the axial gauge (say the light-cone gauge $A_{-}=0$) non-supersymmetric Chern-Simons' theory is a quadratic theory. Hence in this gauge there are no gauge-gauge interactions. Then ...
### About the gauge invariance of Chern-Simons' theory (in local coordinates)
I am aware of the differential form language proof of the fact that for arbitrary gauge transformations the Chern-Simons' term shifts by a WZW term (on the boundary). But I am getting confused if ...
### Path integral and geometric quantization
I was wondering how one obtains geometric quantization from a path integral. It's often assumed that something like this is possible, for example, when working with Chern-Simons theory, but rarely ...
### 't Hooft limit of coupling fundamental fermions to Chern-Simons theory
This question is in reference to this paper: arXiv:1110.4386 [hep-th]. I would like to know what is the derivation or a reference to the proof of their crucial equation 2.3 (page 12). In their ...
### Some questions about flavour and R-symmetry in $2+1$ ${\cal N}=3$ theory
I have heard this fact that for ${\cal N}=3$ theories in $2+1$ with $N_f$ ${\cal N}=3$ matter fields the flavour symmetry group is $USp(N_f)$, $U(N_f)$ or $SO(2N_f)$ depending on whether the gauge ...
### Pedagogic reference for calculation of 2-loop anomalous dimension (supersymmetric)
I want to know of pedagogic references which teach how to compute anomalous dimensions (..wave-function renormalization..) at lets say 2-loops. I guess there might be specialized techniques for ...
### The ${\cal N} = 3$ Chern-Simons matter lagrangian
This question is sort of a continuation of this previous question of mine. I would like to know of some further details about the Lagrangian discussed in this paper in equation 2.8 (page 7) and in ...
### Integrating over a gauge field in the field integral formalism
I'm currently trying to study a chapter in Altland & Simons, "Condensed Matter Field Theory" (2nd edition) and I'm stuck at the end of section 9.5.2, page 579. Given the euclidean Chern-Simons ...
### Wilson Loops in Chern-Simons theory with non-compact gauge groups
VEVs of Wilson loops in Chern-Simons theory with compact gauge groups give us colored Jones, HOMFLY and Kauffman polynomials. I have not seen the computation for Wilson loops in Chern-Simons theory ...
### Normalization of the Chern-Simons level in $SO(N)$ gauge theory
In a 3d SU(N) gauge theory with action $\frac{k}{4\pi} \int \mathrm{Tr} (A \wedge dA + \frac{2}{3} A \wedge A \wedge A)$, where the generators are normalized to \$\mathrm{Tr}(T^a ...
### Chern-Simons theory
In Witten's paper on QFT and the Jones polynomial, he quantizes the Chern-Simons Lagrangian on $\Sigma\times \mathbb{R}^1$ for two case: (1) $\Sigma$ has no marked points (i.e., no Wilson loops) and ...
### Models of higher Chern-Simons type
It has long been clear that (the action functional of) Chern-Simons theory has various higher analogs and variations of interest. This includes of course traditional higher dimensional Chern-Simons ...
http://www.nag.com/numeric/CL/nagdoc_cl23/html/G04/g04bcc.html
# NAG Library Function Document nag_anova_row_col (g04bcc)
## 1 Purpose
nag_anova_row_col (g04bcc) computes the analysis of variance for a general row and column design together with the treatment means and standard errors.
## 2 Specification
#include <nag.h>
#include <nagg04.h>
void nag_anova_row_col (Integer nrep, Integer nrow, Integer ncol, const double y[], Integer nt, const Integer it[], double *gmean, double tmean[], double table[], double c[], Integer tdc, Integer irep[], double rpmean[], double rmean[], double cmean[], double r[], double ef[], double tol, Integer irdf, NagError *fail)
## 3 Description
In a row and column design the experimental material can be characterized by a two-way classification, nominally called rows and columns. Each experimental unit can be considered as being located in a particular row and column. It is assumed that all rows are of the same length and all columns are of the same length. Sets of equal numbers of rows and columns can be grouped together to form replicates, sometimes known as squares or rectangles, as appropriate.
If for a replicate, the number of rows, the number of columns and the number of treatments are equal and every treatment occurs once in each row and each column then the design is a Latin square. If this is not the case the treatments will be non-orthogonal to rows and columns. For example in the case of a lattice square each treatment occurs only once in each square.
For a row and column design, with $t$ treatments in $r$ rows and $c$ columns and $b$ replicates or squares with $n=brc$ observations, the linear model is:
$y_{ijk(l)} = \mu + \beta_i + \rho_j + \gamma_k + \tau_l + e_{ijk}$
$i=1,2,\dots ,b\text{; }j=1,2,\dots ,r\text{;}k=1,2,\dots ,c\text{; }l=1,2,\dots ,t$, where ${\beta }_{i}$ is the effect of the $i$th replicate, ${\rho }_{j}$ is the effect of the $j$th row, ${\gamma }_{k}$ is the effect of the $k$th column and the $ijk\left(l\right)$ notation indicates that the $l$th treatment is applied to the unit in row $j$, column $k$ of replicate $i$.
To compute the analysis of variance for a row and column design the mean is computed and subtracted from the observations to give, ${y}_{ijk\left(l\right)}^{\prime }={y}_{ijk\left(l\right)}-\stackrel{^}{\mu }$. Since the replicates, rows and columns are orthogonal the estimated effects, ignoring treatment effects, ${\stackrel{^}{\beta }}_{i}$, ${\stackrel{^}{\rho }}_{j}$, ${\stackrel{^}{\gamma }}_{k}$, can be computed using the appropriate means of the ${y}_{ijk\left(l\right)}^{\prime }$, and the unadjusted sum of squares computed as the appropriate sum of squared totals for the ${y}_{ijk\left(l\right)}^{\prime }$ divided by number of units per total. The observations adjusted for replicates, rows and columns can then be computed by subtracting the estimated effects from ${y}_{ijk\left(l\right)}^{\prime }$ to give ${y}_{ijk\left(l\right)}^{\prime \prime }$.
In the case of a Latin square design the treatments are orthogonal to replicates, rows and columns and so the treatment effects, ${\stackrel{^}{\tau }}_{l}$, can be estimated as the treatment means of the adjusted observations, ${y}_{ijk\left(l\right)}^{\prime \prime }$. The treatment sum of squares is computed as the sum of squared treatment totals of the ${y}_{ij\left(l\right)}^{\prime \prime }$ divided by the number of times each treatment is replicated. Finally the residuals, and hence the residual sum of squares, are given by, ${r}_{ij\left(l\right)}={y}_{ij\left(l\right)}^{\prime \prime }-{\stackrel{^}{\tau }}_{l}$.
For a design which is not orthogonal, for example a lattice square or an incomplete Latin square, the treatment effects adjusted for replicates, rows and columns need to be computed. The adjusted treatment effects are found as the solution to the equations:
$A\hat{\tau} = \left(R - N_b N_b^{\mathrm{T}}/(rc) - N_r N_r^{\mathrm{T}}/(bc) - N_c N_c^{\mathrm{T}}/(br)\right)\hat{\tau} = q$
where $q$ is the vector of the treatment totals of the observations adjusted for replicates, rows and columns, ${y}_{ijk\left(l\right)}^{\prime \prime }$; $R$ is a diagonal matrix with ${R}_{ll}$ equal to the number of times the $l$th treatment is replicated, and ${N}_{b}$ is the $t$ by $b$ incidence matrix, with ${N}_{l,i}$ equal to the number of times treatment $l$ occurs in replicate $i$, with ${N}_{r}$ and ${N}_{c}$ being similarly defined for rows and columns. The solution to the equations can be written as:
$\hat{\tau} = \Omega q$
where, $\Omega $ is a generalized inverse of $A$. The solution is found from the eigenvalue decomposition of $A$. The residuals are first calculated by subtracting the estimated adjusted treatment effects from the adjusted observations to give ${r}_{ij\left(l\right)}^{\prime }={y}_{ij\left(l\right)}^{\prime \prime }-{\stackrel{^}{\tau }}_{l}$. However, since only the unadjusted replicate, row and column effects have been removed and they are not orthogonal to treatments, the replicate, row and column means of the ${r}_{ij\left(l\right)}^{\prime }$ have to be subtracted to give the correct residuals, ${r}_{ij\left(l\right)}$ and residual sum of squares.
Given the sums of squares, the mean squares are computed as the sums of squares divided by the degrees of freedom. The degrees of freedom for the unadjusted replicates, rows and columns are $b-1$, $r-1$ and $c-1$ respectively and for the Latin square designs the degrees of freedom for the treatments is $t-1$. In the general case the degrees of freedom for treatments is the rank of the matrix $\Omega $. The $F$-statistic given by the ratio of the treatment mean square to the residual mean square tests the hypothesis:
$H_0: \tau_1 = \tau_2 = \cdots = \tau_t = 0.$
The standard errors for the difference in treatment effects, or treatment means, for Latin square designs, are given by:
$\mathrm{se}\left(\hat{\tau}_j - \hat{\tau}_{j^*}\right) = \sqrt{2s^2/(bt)}$
where ${s}^{2}$ is the residual mean square. In the general case the variances of the treatment effects are given by:
$\mathrm{Var}(\hat{\tau}) = \Omega s^2$
from which the appropriate standard errors of the difference between treatment effects or the difference between adjusted means can be calculated.
The analysis of a row-column design can be considered as consisting of different strata: the replicate stratum, the rows within replicate and the columns within replicate strata and the units stratum. In the Latin square design all the information on the treatment effects is given at the units stratum. In other designs there may be a loss of information due to the non-orthogonality of treatments and replicates, rows and columns and information on treatments may be available in higher strata. The efficiency of the estimation at the units stratum is given by the (canonical) efficiency factors, these are the nonzero eigenvalues of the matrix, $A$, divided by the number of replicates in the case of equal replication, or by the mean of the number of replicates in the unequally replicated case, (see John (1987)). If more than one eigenvalue is zero then the design is said to be disconnected and information on some treatment comparisons can only be obtained from higher strata.
## 4 References
Cochran W G and Cox G M (1957) Experimental Designs Wiley
Davis O L (1978) The Design and Analysis of Industrial Experiments Longman
John J A (1987) Cyclic Designs Chapman and Hall
John J A and Quenouille M H (1977) Experiments: Design and Analysis Griffin
Searle S R (1971) Linear Models Wiley
## 5 Arguments
1: nrep – IntegerInput
On entry: the number of replicates, $b$.
Constraint: ${\mathbf{nrep}}\ge 1$.
2: nrow – IntegerInput
On entry: the number of rows per replicate, $r$.
Constraint: ${\mathbf{nrow}}\ge 2$.
3: ncol – IntegerInput
On entry: the number of columns per replicate, $c$.
Constraint: ${\mathbf{ncol}}\ge 2$.
4: y[${\mathbf{nrep}}×{\mathbf{nrow}}×{\mathbf{ncol}}$] – const doubleInput
On entry: the $n=brc$ observations ordered by columns within rows within replicates. That is ${\mathbf{y}}\left[rc\left(i-1\right)+r\left(j-1\right)+k-1\right]$ contains the observation from the $k$th column of the $j$th row of the $i$th replicate, $i=1,2,\dots ,b\text{; }j=1,2,\dots ,r$ and $k=1,2,\dots ,c$.
5: nt – IntegerInput
On entry: the number of treatments. If only replicates, rows and columns are required in the analysis then set ${\mathbf{nt}}=1$.
Constraint: ${\mathbf{nt}}\ge 1$.
6: it[${\mathbf{nrep}}×{\mathbf{nrow}}×{\mathbf{ncol}}$] – const IntegerInput
On entry: if ${\mathbf{nt}}>1$, ${\mathbf{it}}\left[i-1\right]$ indicates which of the nt treatments unit $i$ received, $i=1,2,\dots ,n$. If ${\mathbf{nt}}=1$, it is not referenced.
Constraint: if ${\mathbf{nt}}\ge 2$, $1\le {\mathbf{it}}\left[\mathit{i}-1\right]\le {\mathbf{nt}}$, for $\mathit{i}=1,2,\dots ,n$.
7: gmean – double *Output
On exit: the grand mean, $\stackrel{^}{\mu }$.
8: tmean[nt] – doubleOutput
On exit: if ${\mathbf{nt}}\ge 2$, ${\mathbf{tmean}}\left[l-1\right]$ contains the (adjusted) mean for the $l$th treatment, ${\stackrel{^}{\mu }}^{*}+{\stackrel{^}{\tau }}_{l}$, $l=1,2,\dots ,t$, where ${\stackrel{^}{\mu }}^{*}$ is the mean of the treatment adjusted observations ${y}_{ijk\left(l\right)}-{\stackrel{^}{\tau }}_{l}$. Otherwise tmean is not referenced.
9: table[$6×5$] – doubleOutput
Note: the $\left(i,j\right)$th element of the matrix is stored in ${\mathbf{table}}\left[\left(i-1\right)×5+j-1\right]$.
On exit: the analysis of variance table. Column 1 contains the degrees of freedom, column 2 the sum of squares, and where appropriate, column 3 the mean squares, column 4 the $F$-statistic and column 5 the significance level of the $F$-statistic. Row 1 is for replicates, row 2 for rows, row 3 for columns, row 4 for treatments (if ${\mathbf{nt}}>1$), row 5 for residual and row 6 for total. Mean squares are computed for all but the total row, $F$-statistics and significance are computed for treatments, replicates, rows and columns. Any unfilled cells are set to zero.
10: c[${\mathbf{nt}}×{\mathbf{tdc}}$] – doubleOutput
On exit: the upper triangular part of c contains the variance-covariance matrix of the treatment effects, the strictly lower triangular part contains the standard errors of the difference between two treatment effects (means), i.e., ${\mathbf{c}}\left[\left(i-1\right)×{\mathbf{tdc}}+j-1\right]$ contains the covariance of treatment $i$ and $j$ if $j\ge i$ and the standard error of the difference between treatment $i$ and $j$ if $j<i$, $i=1,2,\dots ,t$ and $j=1,2,\dots ,t$.
11: tdc – IntegerInput
On entry: the stride separating matrix column elements in the array c.
Constraint: ${\mathbf{tdc}}\ge {\mathbf{nt}}$.
12: irep[nt] – IntegerOutput
On exit: if ${\mathbf{nt}}>1$, ${\mathbf{irep}}\left[l-1\right]$ contains the treatment replications, ${R}_{ll}$, $l=1,2,\dots ,{\mathbf{nt}}$. Otherwise irep is not referenced.
13: rpmean[nrep] – doubleOutput
On exit: if ${\mathbf{nrep}}>1$, ${\mathbf{rpmean}}\left[i-1\right]$ contains the mean for the $i$th replicate, $\stackrel{^}{\mu }+{\stackrel{^}{\beta }}_{i}$, $i=1,2,\dots ,b$. Otherwise rpmean is not referenced.
14: rmean[${\mathbf{nrep}}×{\mathbf{nrow}}$] – doubleOutput
On exit: ${\mathbf{rmean}}\left[j-1\right]$ contains the mean for the $j$th row, $\stackrel{^}{\mu }+{\stackrel{^}{\rho }}_{i}$, $j=1,2,\dots ,r$.
15: cmean[${\mathbf{nrep}}×{\mathbf{ncol}}$] – doubleOutput
On exit: ${\mathbf{cmean}}\left[k-1\right]$ contains the mean for the $k$th column, $\stackrel{^}{\mu }+{\stackrel{^}{\gamma }}_{k}$, $k=1,2,\dots ,c$.
16: r[${\mathbf{nrep}}×{\mathbf{nrow}}×{\mathbf{ncol}}$] – doubleOutput
On exit: ${\mathbf{r}}\left[i-1\right]$ contains the residuals, ${r}_{i}$, $i=1,2,\dots ,n$.
17: ef[nt] – doubleOutput
On exit: if ${\mathbf{nt}}\ge 2$, the canonical efficiency factors. Otherwise ef is not referenced.
18: tol – doubleInput
On entry: the tolerance value used to check for zero eigenvalues of the matrix $\Omega $. If ${\mathbf{tol}}=0.0$ a default value of 0.00001 is used.
Constraint: ${\mathbf{tol}}\ge 0.0$.
19: irdf – IntegerInput
On entry: an adjustment to the degrees of freedom for the residual and total.
${\mathbf{irdf}}\ge 1$
The degrees of freedom for the total is set to $n-{\mathbf{irdf}}$ and the residual degrees of freedom adjusted accordingly.
${\mathbf{irdf}}=0$
The degrees of freedom for the total are set to $n-1$, as usual.
Constraint: ${\mathbf{irdf}}\ge 0$.
20: fail – NagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_2_INT_ARG_LT
On entry, ${\mathbf{tdc}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nt}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdc}}\ge {\mathbf{nt}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_ARRAY_CONS
The contents of array it are not valid.
Constraint: if ${\mathbf{nt}}\ge 2$, $1\le {\mathbf{it}}\left[\mathit{i}-1\right]\le {\mathbf{nt}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nrep}}×{\mathbf{nrow}}×{\mathbf{ncol}}$.
The contents of array it are not valid.
Constraint: some value of it must equal $j$ for each $j=1,2,\dots ,{\mathbf{nt}}$, i.e., every treatment must appear at least once.
NE_ARRAY_CONSTANT
On entry, the elements of the array y are constant.
NE_G04BC_DISCON
The design is disconnected; the standard errors may not be valid. The design may have a nested structure.
NE_G04BC_REPS
The treatments are totally confounded with replicates, rows and columns, so the treatment sum of squares and degrees of freedom are zero. The analysis of variance table is not computed, except for replicate, row, column, total sum of squares and degrees of freedom.
NE_G04BC_RESD
The residual degrees of freedom or the residual sum of squares are zero, columns 3, 4 and 5 of the analysis of variance table will not be computed and the matrix of standard errors and covariances, c, will not be scaled.
NE_G04BC_ST_ERR
A computed standard error is zero due to rounding errors, or the eigenvalue computation failed to converge. Both are unlikely errors.
NE_INT_ARG_LT
On entry, ${\mathbf{irdf}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{irdf}}\ge 0$.
On entry, ${\mathbf{ncol}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ncol}}\ge 2$.
On entry, ${\mathbf{nrep}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nrep}}\ge 1$.
On entry, ${\mathbf{nrow}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nrow}}\ge 2$.
On entry, ${\mathbf{nt}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nt}}\ge 1$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_REAL_ARG_LT
On entry, tol must not be less than 0.0: ${\mathbf{tol}}=〈\mathit{\text{value}}〉$.
## 7 Accuracy
The algorithm used in nag_anova_row_col (g04bcc), described in Section 3, achieves greater accuracy than the traditional algorithms based on the subtraction of sums of squares.
## 8 Further Comments
To estimate missing values the Healy and Westmacott procedure or its derivatives may be used (see John and Quenouille (1977)). This is an iterative procedure in which estimates of the missing values are adjusted by subtracting the corresponding values of the residuals. The new estimates are then used in the analysis of variance. This process is repeated until convergence. A suitable initial value may be the grand mean. When using this procedure irdf should be set to the number of missing values plus one to obtain the correct degrees of freedom for the residual sum of squares.
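As a rough sketch of the iteration just described (an editorial illustration, not NAG code; `anova_residuals` is a hypothetical stand-in for a call to nag_anova_row_col (g04bcc) that returns the residual of every observation, imputed cells included):
```python
import numpy as np

def impute_missing(y, missing, anova_residuals, tol=1e-8, max_iter=100):
    """Healy-Westmacott style imputation.

    y       : 1-D array of observations (missing cells may hold anything)
    missing : boolean mask marking the missing cells
    """
    y = np.asarray(y, dtype=float).copy()
    missing = np.asarray(missing, dtype=bool)
    y[missing] = y[~missing].mean()        # a suitable initial value: the grand mean
    for _ in range(max_iter):
        r = anova_residuals(y)             # residuals from the full analysis of variance
        step = r[missing]
        y[missing] -= step                 # adjust the estimates by their residuals
        if np.max(np.abs(step)) < tol:     # converged: residuals at imputed cells vanish
            break
    return y
```
As noted above, irdf should then be set to the number of missing values plus one so that the residual degrees of freedom come out right.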
For analysis of covariance the residuals are obtained from an analysis of variance of both the response variable and the covariates. The residuals from the response variable are then regressed on the residuals from the covariates using, say, nag_regress_confid_interval (g02cbc) or nag_regsn_mult_linear (g02dac). The results from those functions can be used to test for the significance of the covariates. To test the significance of the treatment effects after fitting the covariate, the residual sum of squares from the regression should be compared with the residual sum of squares obtained from the equivalent regression but using the residuals from fitting replicates, rows and columns only.
## 9 Example
The data for a $5×5$ Latin square is input and the ANOVA and treatment means are computed and printed. Since the design is orthogonal, only one standard error need be printed.
### 9.1 Program Text
Program Text (g04bcce.c)
### 9.2 Program Data
Program Data (g04bcce.d)
### 9.3 Program Results
Program Results (g04bcce.r)
http://physics.stackexchange.com/questions/6946/what-does-it-mean-to-say-that-mass-approaches-infinity?answertab=oldest
# What does it mean to say that mass “approaches infinity”?
What does it mean to say that mass "approaches infinity"?
I have read that mass of a body increases with the speed and when the body reaches the speed of light, the mass becomes infinity.
What exactly does it mean to say that the mass "approaches infinity" or "becomes infinity"? I am not able to get a picture of "infinite mass" in my mind.
Why does this question receive so much attention, and why do people give essentially the same answer over and over! – MBN Mar 15 '11 at 16:22
Everything was said, but not by everybody. The credo of people too lazy to read the thread before writing. – Georg Mar 15 '11 at 20:14
How can a question that generates so many answers get a negative vote? – Carl Brannen Mar 16 '11 at 2:11
-1. @Vinoth this question is ill-formed, to say the least. You provide no context or motivation or some idea of your background so one knows how to respond and at what level. @Carl this is the sort of question that, because it is ill-defined, becomes a Rorschach-test with people providing answers for what they interpret the question to be. Is this really the standard we want to set for questions on this site? – user346 Mar 16 '11 at 2:56
## 6 Answers
The answers given so far are fine, but to my surprise nobody's mentioned the most important point: in modern terminology, we generally don't say that the mass of an object increases with speed. "Relativistic mass increase" is outdated terminology, not used by most physicists anymore. In general, nowadays, "mass" means "rest mass" and is independent of velocity. Igor Ivanov's answer to this question says it all.
I haven't read the article by Lev Okun that he refers to, but I like the term "pedagogical virus" for this notion.
It means that its inertia (resistance to change in the state of motion) approaches infinity. You probably already know from $F=ma$ that for the same change in speed (acceleration), a larger mass requires a larger force. As the velocity of a body approaches the speed of light, its inertia (i.e. $\gamma m$) becomes so high that you would need an infinite force to accelerate it to exactly the speed of light.
And as Georg says, whenever you see "becomes infinite", read it as "approaches infinity".
If you want to imagine an infinite mass, just think of a piece of matter which does not respond (i.e. accelerate) to any force. Obviously, it is an ideal concept. In reality "the mass approaching infinity" means it is increasing without any finite upper limit.
As the speed of a body approaches the speed of light, it becomes harder and harder to accelerate the body. This fact is captured by the following formula.
$$f(v) = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}$$
When a physicist says that
"when the speed of a body equals the speed of light, its mass becomes infinite,"
he is not evaluating $f(v = c)$, which is undefined, but asserting that $\lim_{v\to c}f(v) = \infty$.
You might want to read about limits.
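As a purely numerical illustration of that limit (an editorial addition, not part of the original answer), here is the factor $\gamma = 1/\sqrt{1-v^2/c^2}$ that multiplies the rest mass, evaluated ever closer to $c$:
```python
from math import sqrt

def gamma(v, c=1.0):
    """Lorentz factor; multiplies the rest mass m_0 in the formula above."""
    return 1.0 / sqrt(1.0 - (v / c) ** 2)

for v in (0.9, 0.99, 0.999, 0.999999):
    print(f"v = {v}c  ->  gamma = {gamma(v):.1f}")
# The factor grows without bound: 2.3, 7.1, 22.4, 707.1, ...
```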
It means that if a finite force acts on the mass in its stationary frame, then in all other frames, as the measured velocity approaches c, the acceleration approaches 0, which implies that the "relativistic mass", the "apparent mass", the "effective mass", call it what you will, approaches infinity.
The mass will never be infinite, because it would require infinite work to reach the required speed. It's just a figure of speech.
Instead of trying to imagine "infinite mass", try imagining the process: as you apply force ("press the accelerator pedal"), the object, instead of going faster, gets heavier. The more work you provide, the heavier the object gets.
Wasn't there enough written on "rechurners" in the comments? -1 – Georg Mar 25 '11 at 13:01
@Georg: Where in the other comments was work-mass correlation mentioned? It is deceivingly easy to think "something suddenly becomes infinitely heavy" and ponder how that happens. If every gram of weight gained means 90 terajoules of energy expended, it opens eyes on cost of "going to infinity". – SF. Mar 25 '11 at 13:44
So on top You did not understand the question at all. – Georg Mar 25 '11 at 14:04
If that is so, then I still don't. Care to explain? – SF. Mar 25 '11 at 15:07
http://physics.stackexchange.com/questions/31326/is-a-hard-drive-heavier-when-it-is-full/31700
# Is a hard drive heavier when it is full?
Browsing Quora, I saw the following question with contradicting answers.
For the highest voted answer:
The bits are represented by certain orientations of magnetic fields which shouldn't have any effect on gravitational mass.
But, another answer contradicts that one:
Most importantly, higher information content correlates with a more energetic configuration and this is true regardless of the particular type of storage... Now, as per Einstein's most famous formula, energy is equivalent to mass.
Which answer is correct?
Poetic licence should be read as implied when interpreting the word "full". The disk doesn't know when it's "full", and it might be full of zeroes or full of random numbers - close to opposites when you're accounting for energy. – Bernd Jendrissek Jul 10 '12 at 1:35
Here's my own addition to the question: When magnetic domains are all aligned similarly -- all zeros or all ones in most (but not all) possible magnetic media coding schemes -- you begin to get a non-trivial external magnetic field. In effect, the disk becomes a weak permanent magnet. It takes energy to create such fields, and such external fields simply would not exist at any distance for random bits. So, has this external energy field been taken fully into account in the estimates of the energy states of a magnetic disc that has only one state -- that is, that encodes no data? – Terry Bollinger Jul 11 '12 at 3:31
Although neither a disk of all 1s nor a disk of all 0s contains anything we'd call useful information. – Shadur Mar 4 at 15:05
## 7 Answers
I wrote a blog post about this some time ago. The answer is yes, but by a tiny amount that you would never be able to measure: something like $10^{-14}\text{ g}$ (roughly) for a typical ~1TB hard drive.
That value comes from the formula for the potential energy of a pair of magnetic dipoles,
$$E = \frac{\mu_0}{4\pi}\frac{\mu_1 \mu_2 \cos\theta}{r^3}$$
In my post, I estimate that a hard drive might contain $10^{23}$ electrons total, split into $10^{12}$ magnetic domains which are spaced around $0.1\ \mathrm{\mu m}$ apart. That means the magnetic moment of each of these domains is $10^{11}\mu_B$, with $\mu_B = \frac{e\hbar}{2m_e}$ being the Bohr magneton. If you plug this into the formula above, and multiply by 4 under the assumption that each magnetic domain interacts with 4 nearest neighbors, you wind up finding that the total energy is no more than $5\text{ J}$, depending on the value of $\cos\theta$. That corresponds, via $E = mc^2$, to an equivalent mass of around $10^{-14}\text{ g}$.
Admittedly all of these numbers are rough order-of-magnitude estimates, and there are various other effects that contribute little bits to the energy, but any corrections aren't going to shift this by more than a couple of orders of magnitude one way or another. Given that the equivalent mass of the energy stored in the magnets is a full 17 orders of magnitude less than the mass of the hard drive itself, it's safe to say that the difference is undetectable.
Incidentally, I also tried out the equivalent calculation for flash memory in another blog post.
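As a sanity check on the order of magnitude (editorial back-of-envelope arithmetic, not taken from the answer itself), here is the $E=mc^2$ conversion of a few joules of stored interaction energy into mass:
```python
c = 2.998e8        # speed of light, m/s
E = 5.0            # rough upper bound on the stored interaction energy, joules
m = E / c**2       # mass equivalent, kg
print(f"{m:.1e} kg  =  {m * 1e3:.1e} g")   # ~5.6e-17 kg, i.e. a few times 1e-14 g
```
That is indeed the $10^{-14}\text{ g}$ order quoted above, hopelessly small next to the hundreds of grams of the drive itself.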
Good answer, but this doesn't yet address how you expect the dipole configuration and its associated energy content to change based on the amount of information stored in the drive (+1 anyway). – kleingordon Jul 5 '12 at 0:45
Yeah, I intentionally didn't address that in detail. It could vary somewhere between $+5\text{ J}$ (for an anti-aligned grid) and $-5\text{ J}$ (for completely uniform magnetism). The main point I want to make is that no matter what assumptions you make, the weight difference is too small to measure. – David Zaslavsky♦ Jul 5 '12 at 0:52
@DavidZaslavsky: The issue is that the randomly aligned dipoles have less magnetic field than aligned dipoles, so that the sign of the correction might go the other way--- a full hard drive would weigh less from the reduced dipole interaction strength. Writing 0101010101 would be energetically optimal then. But this argument is confounded by the domain surface energy, which goes the other way. – Ron Maimon Jul 5 '12 at 2:22
@DavidZaslavsky: Please fix two things: 1. it is technically valid to use E=mc^2 in this way, the mass of the hard drive is equal to the energy in the hard drive (the hard drive isn't moving, so there isn't even an ambiguity regarding relativistic mass, so it's just true). 2. The calculation you did is wrong--- the dipole-dipole interaction gives a different sign from what OP suggested, and domain energy (which is comparable when the hard drive domains are the same as the natural domain structure in the equilibrated drive) goes the way OP says. – Ron Maimon Jul 5 '12 at 14:44
@DavidZaslavsky, I think RonMaimon is correct on this. Think of the energy required to force domains into alignment in a neodymium magnet, and how brittle the captured energy makes the magnet towards fracturing. I'm pretty sure that the lowest energy configuration is the one with equal alternating 1s and 0s, and the highest is all 0s or all 1s. Of course, in real disks they don't actually reset to all zeros at all, and wipe programs usually use random patterns. But nonetheless, reducing the disk to one giant bit -- all 1s or all 0s -- should be the highest energy configuration. (I think!) – Terry Bollinger Jul 10 '12 at 1:59
A very similar question is how much energy (or mass) is required to store some quantity of information, regardless of the format. Whether you store your information with a voltage over a capacitor or with a magnetic domain, to avoid corruption/read errors the energy to store one bit should be $$E \gg kT$$
In general, a good minimum is $E=6kT$. That's $10^{-20}\;\mathrm{J/bit}$ at room temperature, or $10^{-9}\;\mathrm{J} = 10^{-26}\;\mathrm{kg}$ for a 1 TB drive.
Note that this is a much lower number than David Zaslavsky's post. In general, electronic storage and processing uses more energy or power than the thermodynamic limit by many orders of magnitude.
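To put rough numbers on this (an editorial calculation, assuming room temperature $T \approx 300\ \mathrm{K}$), the per-bit figures work out as follows:
```python
k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0              # room temperature, K
c = 2.998e8            # speed of light, m/s

E_bit = 6 * k_B * T    # the "E = 6kT" rule of thumb: ~2.5e-20 J per bit
m_bit = E_bit / c**2   # mass equivalent per bit: ~2.8e-37 kg
print(f"E_bit = {E_bit:.1e} J,  m_bit = {m_bit:.1e} kg")
# Even multiplied by the ~1e13 bits of a 1 TB drive, the mass equivalent stays
# many orders of magnitude below anything a balance could detect.
```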
All you need are energy barriers to flipping a bit, there doesn't have to be any energy difference between the two states, so the energy change between full and empty can be zero even if the two states don't thermodynamically mix. For example, for an abacus, there's no energy change for sliding the beads, just an enormous barrier to moving them a macroscopic distance thermodynamically. – Ron Maimon Jul 10 '12 at 20:00
Ron, I believe you're correct, and so my answer might be misleading. I do wonder whether information entropy can be thought of as a source of mass: to order bits, we must do work by removing entropy. Such an explanation would have an order of magnitude similar to my answer. – emarti Dec 18 '12 at 8:40
the problem is that both empty and full have the same information entropy, if you know what bits are written on the drive in both cases. It's not the value, but the randomness, that tells you the entropy. So I can't make sense of any answer except "no, they weigh the same". – Ron Maimon Jan 4 at 2:51
Lay question: what is the ">>" operator? I know two ways to read it ("much greater than" and "right shift bits") but I don't think either of those makes sense. Can you link to a layman-understandable explanation of either the operator or (even better) the equation? – Larry OBrien Feb 7 at 19:13
@OBrien, I mean "much greater than". The energy (or energy barrier) needs to be much greater than the thermal energy to prevent temperature-induced changes in the bits. Since we're thinking in terms of an exponential process, we can satisfy $E>>kT$ with $6 >>1$. – emarti Feb 10 at 21:09
Whether your hard drive is "filled" or not, it is formatted. This is how your computer is able to tell how big the drive is, for example. So to answer the question properly requires us to figure out the statistics of the number of digital domains in a freshly formatted drive and compare that with the statistics of the domains in one with (presumably random) data written to it.
A freshly formatted hard disk drive from the factory has zeroes stored in its sectors. See the interesting wikipedia article on formatting, especially this entry. If you wish to erase data on a hard drive it is not enough to "delete" it. One must also write zeroes over it so all those digital domains get stuck back to their newly formatted situation. Those zeroes do not mean that there is no magnetic domain changes. Instead, it means that the domain changes are in a particular pattern that encodes "0" as opposed to "1".
The encoding for hard disks is typically a "run length limited (RLL)" scheme. By "run length" they mean the number of consecutive domains that are oriented in the same direction. The limitation is to prevent this number from being too large as this would allow the hard disk reader to get out of sync with the data. Wikipedia claims that some media are also DC balanced with "some types of recording media", that is, there are just as many domains oriented one way as the other. I haven't seen this in recorded media but this is common to stuff like fast ethernet (PHY chips use it) or digital video standards such as HDMI which uses TMDS.
So the accepted post by David Zaslavsky is incorrect. However, the physics of it is correct and so I voted +1 for it. But this answer gives the "rest of the story"; life is not as simple as it looks sometimes.
Filling a hard drive should not change the result of any measurement of its mass.
Background:
In Fert and Grünberg's original systems, a layer of non-magnetic chromium was sandwiched by layers of ferromagnetic iron. If the atomic spins in successive iron layers were oriented in the same direction, making the overall magnetisation of both layers parallel, electrons could also align their spins and pass through the material with little resistance. But electrical resistance shot up when the second iron layer had its magnetisation aligned antiparallel to the first. That's because the electrons which had oriented their spins with one set of iron atoms were then scattered on encountering the next layer. Fert's team used a series of iron layers with alternating magnetisation, which strengthened the effect on electron flow.
Reference:
http://www.rsc.org/chemistryworld/News/2007/October/09100703.asp
The entropy of an ordered drive is necessarily lower than one containing random bits. When one stores information on the drive that order can be observed as localized. Were one to be able to store data without transferring energy to the magnetic bits then the mass would not change. If the drive is in a previously ordered state as it will be if it has already been written to ( including if it was erased) then the information you write may actually create a net loss of order over the prior state. You lose that prior order as net energy to the environment. However, in the absence of a free lunch it would seem to follow that any order of the bits that can be retrieved requires a higher potential energy state than the absence of that localized information and this has a greater total mass energy.
There are systems for which the maximum of entropy comes before the maximum of energy. Such systems necessarily display negative temperatures. – dmckee♦ Jul 10 '12 at 0:59
The bits are represented by certain orientations of magnetic fields which shouldn't have any effect on the mass.
They add energy, so they add mass. – Ron Maimon Jul 5 '12 at 2:22
In an "empty" hard drive the magnetic moments are oriended in a way that there is no logical information on the hard drive. If we write something to the hard drive the orientation of the magnetic moments is changed in a way that the computer "understands" it. Changing the orientation of the magnetic moments should not change the mass. – L. Hazel Jul 5 '12 at 10:58
This is incorrect. Changing the orientation of the magnetic moments changes the energy and therefore changes the mass by a tiny amount. Whether it goes up or down depends on the details – Ron Maimon Jul 5 '12 at 14:45
OK, I guess I got it, thank you. – L. Hazel Jul 5 '12 at 17:29
The 1s and 0s are not filled up and removed based on removal or deletion of files; this is done in the FAT. No-one who answers you knows what type of charge is a 1 or a 0 in the "hard drive", as we have not specified which hard drive.
So even after the science nerds have come up with some convoluted reason it must be heavier (because electrons have mass), this will be based on the big assumption that there are more electrons in a full hard drive xD
There could very well be more electrons in an empty hard drive. Actually, though, I reckon it'll balance out: you see, anywhere there is a lot of one charge, it tends to repel that charge from the surroundings, into the parts we would not measure, all the many millions of redundant particles in the spaces that we have not yet utilized to store information.
-1: It has nothing to do with extra electrons. The mass is the mass of the extra energy in the drive. This is theoretically interesting--- how much energy difference do you need to store a given amount of data semi-stably? There is a minimum energy required to change data, but barriers to spontaneous erasure can be large, so there is no energy difference which depends on the type of information you store. As far as your reckoning--- if you aren't a science nerd, why would anyone listen to you? – Ron Maimon Jul 5 '12 at 14:48
http://pgraycode.wordpress.com/2011/02/23/solving-xkcds-nerd-snipping-problem/?like=1&_wpnonce=268a98b8c4
# Code and Bugs
Coding things
### Solving XKCD’s Nerd Sniping problem
#### by Mithrandir
A while ago, during a period of free time, I implemented a solution in Haskell for the XKCD’s raptor problem. Now, I’ll try to solve another problem presented there, the one found in the Nerd Sniping comic. Of course, the implementation will still be in Haskell.
First, we have to define a data structure for keeping the network of resistors. I'll use a structure I used a while ago in a C algorithm for solving the network analysis problem, but simplified compared with what I used at the time. We are only interested in networks formed of resistors, so node elimination alone is enough to solve this problem. Because of this, the data structures used are very simple:
```
import Data.Maybe (fromJust)  -- needed for the fromJust lookups used below

type Resistance = Double
type Conductance = Double

type Network a = [(a, Node a)]
type Node a = [WireTo a]
type WireTo a = (a, Resistance)
```
As you see, we’ll use two association lists: one to keep the mapping between node ids (those can be almost anything) and one to keep the mapping between the node id and the resistance of the wire connected between that node and the actual node. While this structure prevents the know tying problem (homework: try solving this problem using techniques presented in the linked article), it adds duplicated information: for each wire between nodes a and b we store it in two places: once in node a and once in node b.
The previous paragraph hinted that there are two problems which need to be solved. Firstly, when comparing two wires or two nodes, only the ids need to be taken into account, not everything in the pair. Thus, we need to implement our own equality functions.
```
(===) :: (Eq a) => (a, b) -> (a, b) -> Bool
(a, _) === (b, _) = a == b

(=/=) :: (Eq a) => (a, b) -> (a, b) -> Bool
(=/=) a = not . (a ===)
```
To solve the second problem, I wrote two functions for the user and will demand (expect) that the user use them to construct any network. The first function constructs a part of the network: a simple wire (similar to the `return` function from monads).
```
-- builds a simple network: a simple wire
buildPart :: (Ord a) => a -> a -> Resistance -> Network a
buildPart a b r
  | r < 0     = error "Negative resistance is not allowed"
  | a == b    = error "No wire can have the same ends"
  | a > b     = buildPart b a r
  | otherwise = [(a, [(b, r)]), (b, [(a, r)])]
```
Using this function we can already construct the first test network, as seen in the following picture:
```
{-
Simple test: a network with only a wire.
-}
testSimple :: Network Int
testSimple = buildPart 0 1 5
```
Simple network with only a resistor
The second function will take two networks and construct a new one by joining them (similar to the `mplus` function from `MonadPlus` or `mappend` from `Monoid`).
```
-- joins two networks
joinParts :: (Eq a) => Network a -> Network a -> Network a
joinParts [] ns = ns
joinParts (n:ns) nodes
  | other == Nothing = n : joinParts ns nodes
  | otherwise        = (fst n, snd n `combineNodes` m) : joinParts ns nodes'
  where
    other  = lookup (fst n) nodes
    m      = fromJust other
    nodes' = filter (=/= n) nodes
```
Care must be taken when joining two networks containing the same nodes. To combine them correctly, the following two functions are used:
```
{-
Combines two nodes (wires leaving the same node, declared in two parts of the
network).
-}
combineNodes :: (Eq a) => Node a -> Node a -> Node a
combineNodes = foldr (\(i,r) n -> addWireTo n i r)

{-
Adds a wire to a node, doing the right thing if there is a wire there already.
-}
addWireTo :: (Eq a) => Node a -> a -> Resistance -> Node a
addWireTo w a r = parallel $ (a,r) : w
```
When we encounter two parallel resistances, we will transform them immediately.
```
{-
Reduce a list of wires by reducing parallel resistors to a single one.
-}
parallel :: (Eq a) => [WireTo a] -> [WireTo a]
parallel []     = []
parallel (w:ws) = w' : parallel ws'
  where
    ws'  = filter (=/= w) ws
    ws'' = w : filter (=== w) ws
    w'   = if ws'' == [] then w else foldl1 parallel' ws''

{-
Reduce two wires to a single one, if they form parallel resistors.
-}
parallel' :: (Eq a) => WireTo a -> WireTo a -> WireTo a
parallel' (a, r1) (b, r2)
  | a == b    = (a, r1 * r2 / (r1 + r2))
  | otherwise = error "Cannot reduce: not parallel resistors"
```
Right now, we can construct several more tests, presented in the following pictures:
```
{-
Second test: a network with two parallel wires.
-}
testSimple' :: Network Int
testSimple' = buildPart 0 1 10 `joinParts` buildPart 0 1 6

{-
Third test: a network with two series resistors.
-}
testSimple'' = buildPart 0 1 40 `joinParts` buildPart 1 2 2

{-
Fourth test: tetrahedron
-}
testTetra :: Network Int
testTetra
  = buildPart 0 1 1 `joinParts`
    buildPart 0 2 1 `joinParts`
    buildPart 1 2 1 `joinParts`
    buildPart 0 3 1 `joinParts`
    buildPart 1 3 1 `joinParts`
    buildPart 2 3 1
```
A collection of networks
Now, we can even build 2D networks by giving Cartesian id’s to nodes. For example, the following is the simplest instance of the XKCD problem (reduced to a minimum configuration).
```
{-
Fifth test: 2 squares
-}
testSquares :: Network (Int, Int)
testSquares = foldl joinParts [] . map (\(x, y) -> buildPart x y 1) $ list
  where
    list = [((0, 0), (0, 1)), ((0, 1), (0, 2)), ((0, 2), (1, 2)),
            ((1, 2), (1, 1)), ((1, 1), (0, 1)), ((1, 1), (1, 0)), ((0, 0), (1, 0))]
```
Small 2D network
Before going into defining the XKCD problem and solving it, we need to implement the algorithm for solving any network of resistors. The following code is not optimized and I am really sure that it can be tweaked a little to allow for infinite structures thanks to the laziness of Haskell (solving this is left as homework). First, the code tests whether we compute a valid answer or not.
```
{-
Starts the solving phase testing if each node is defined.
-}
solve :: (Ord a) => Network a -> a -> a -> Resistance
solve n st en
  | stn == Nothing = error "Wrong start node"
  | enn == Nothing = error "Wrong end node"
  | otherwise      = solve' n st en
  where
    stn  = lookup st n
    stnd = fromJust stn
    enn  = lookup en n
```
After we are sure that the nodes in question exist in the network, we start the solving process
```
solve' :: (Ord a) => Network a -> a -> a -> Resistance
solve' n st en
  | null candidates = getSolution n st en
  | otherwise       = solve' (removeNode n (head candidates)) st en
  where
    candidates = filter (\(x,_) -> x /= st && x /= en) n
```
When we have only the start and end nodes we return the solution
```
getSolution :: (Eq a) => Network a -> a -> a -> Resistance
getSolution n st en = fromJust . lookup en . fromJust . lookup st $ n
```
Otherwise, when we have a candidate node to be removed, we remove it:
```
removeNode :: (Ord a) => Network a -> (a, Node a) -> Network a
removeNode net w@(tag, nod)
  | length nod == 1 = filter (=/= w) net
  | otherwise       = filtered `joinParts` keep `joinParts` new
  where
    affectedTags = map fst nod
    -- construct the unaffected nodes list
    unaffectedNodes = filter (\(a,_) -> a `notElem` affectedTags) net
    keep = filter (=/= w) unaffectedNodes
    -- and the affected ones
    change = filter (\(a,_) -> a `elem` affectedTags) net
    -- remove the node from all the maps
    filtered = map (purify tag) change
    -- compute the sum of inverses
    sumR = sum . map ((1/) . snd) $ nod
    pairs = [(x, y) | x <- affectedTags, y <- affectedTags, x < y]
    new = foldl joinParts [] . map (buildFromTags nod sumR) $ pairs

purify :: (Eq a) => a -> (a, Node a) -> (a, Node a)
purify t (tag, wires) = (tag, filter (\(a,_) -> a /= t) wires)

buildFromTags :: (Ord a) => Node a -> Conductance -> (a, a) -> Network a
buildFromTags n s (x, y) = buildPart x y (s * xx * yy)
  where
    xx = fromJust . lookup x $ n
    yy = fromJust . lookup y $ n
```
We can test each of the previously defined networks to see if the algorithm works.
```*Main> solve testSimple 0 1
5.0
*Main> solve testSimple' 0 1
3.75
*Main> solve testSimple'' 0 1
40.0
*Main> solve testSimple'' 0 2
42.0
*Main> solve testTetra 0 3
0.5
*Main> solve testSquares (0, 0) (1, 2)
1.4000000000000001```
Now, the solution to the XKCD’s problem. First, we define the network: we will take a finite case, between $(-n,-n)$ and $(n,n)$. By increasing n we will get closer and closer to the actual result (as you will see later, the convergence is pretty good).
```
build :: Int -> Network (Int, Int)
build n = foldl joinParts [] . map (\(x, y) -> buildPart x y 1) $ list
  where
    list   = [(x, y) | x <- points, y <- points, x `neigh` y, x < y]
    points = fillBelow n

fillBelow :: Int -> [(Int, Int)]
fillBelow n = [(x, y) | x <- [-n .. n], y <- [-n .. n]]

neigh :: (Int, Int) -> (Int, Int) -> Bool
neigh (a, b) (c, d) = abs (a - c) + abs (b - d) == 1
```
An instance of the XKCD problem
Thus, to solve the problem for one iteration we have
```
solveXKCD :: Int -> Resistance
solveXKCD n = solve (build n) (0, 0) (1, 2)
```
And the `main`, used for printing the results
`main = mapM (print . \x -> (x, solveXKCD x)) [3..]`
And now the results. I let the program run until it reached a network of size 25 then I stopped it and plotted the results.
Resistance vs grid size, see the convergence speed
From the plot, it is easy to see that the result converges to somewhere in the $.772$ to $.774$ range, which is where the true answer lies.
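For reference (an editorial addition, not part of the original post): the resistance between two nodes a knight's move apart on the infinite grid of $1\,\Omega$ resistors is known in closed form, $R = \frac{4}{\pi} - \frac{1}{2} \approx 0.7732$, which sits right inside the range the finite grids converge to.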
In the end, I’d like to relate to another post from that topic: please add more puzzles like this so that I can do something when I have free time instead of slacking off.
Published: February 23, 2011
Filed Under: Puzzles
### 7 Responses to “Solving XKCD’s Nerd Sniping problem”
1. correctionly says:
*sniping* … it’s nerd *sniping*
2. Mithrandir says:
My bad, fixed now
3. Loller says:
4. Mithrandir says:
Why?
5. Adam Hayward says:
Well worth watching and hilarious. Includes an intro by Peter Norvig and an appearance by Donald Knuth!
6. Mithrandir says:
Wow, thanks for the link :)
7. [...] Solving XKCD’s Nerd Sniping problem A while ago, during a period of free time, I implemented a solution in Haskell for the XKCD’s raptor problem. Now, I’ll [...] [...]
http://nrich.maths.org/7559
# Graphs of Changing Areas
##### Stage: 5 Challenge Level:
The graph below shows the curve $y=\frac{10}{x}$.
Imagine $x$ and $y$ are the length and width of a rectangle.
Each point on the curve represents a rectangle - what property do these rectangles share?
What symmetry does the graph have? How do you know?
What happens to the graph as $x$ gets very large? How do you know?
You could plot graphs of other curves such as $y=\frac{5}{x}$ or $y=\frac{20}{x}$.
How would these graphs relate to the one above? Would the graphs intersect? How do you know?
Rectangles of equal perimeter can be represented graphically by the line $y=\frac{1}{2}P-x$ where P is the perimeter.
Would you expect the line $y=\frac{1}{2}P-x$ to intersect with the curve $y=\frac{10}{x}$ for all values of P?
How can you use the graph to find the smallest possible perimeter of a rectangle with an area of $10$?
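One possible line of attack (an editorial sketch, not part of the original problem page): the smallest perimeter corresponds to the line of the family $y=\frac{1}{2}P-x$ that just touches the curve $y=\frac{10}{x}$. Substituting one into the other gives $x\left(\frac{1}{2}P-x\right)=10$, i.e. $x^2-\frac{1}{2}Px+10=0$, and tangency means the discriminant vanishes: $\left(\frac{1}{2}P\right)^2-40=0$, so $P=4\sqrt{10}\approx 12.6$, attained at $x=y=\sqrt{10}$. In other words, of all rectangles with area $10$, the square has the smallest perimeter.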
http://mathhelpforum.com/algebra/109487-problem-killing-me.html
# Thread:
1. ## This problem is killing me
Hi, I'm new and need help desperately. I have to admit I am probably not as good at math as most of the people on this site, but I tried my best and just can't figure this problem out.
1.532 x 0.659 = 0.623
1.542 x 0.631 = 0.623
1.589 x 0.669 = 0.554
1.474 x 0.705 = 0.669
I know each line is simple of course, but the actual problem is X has to match on all 4 lines....
Any help is appreciated. Thanks in advance.
2. I'm not exactly understanding what you're looking for; I don't see an X in there except what I would assume is a multiplication sign?
3. Originally Posted by Djaevel
I'm not exactly understanding what you're looking for; I don't see an X in there except what I would assume is a multiplication sign?
Yeah sorry multiplication sign is the X I was referring to.
4. I'm sorry, but I don't understand what you're asking...?
Please reply with the full and exact text of the exercise and its instructions, including clarification of what "X" is supposed to indicate (is it a "times" sign, a variable, an empty slot to be filled in, or something else?) and what you mean by "X has to match" (match what? how?).
Thank you!
5. Originally Posted by stapel
I'm sorry, but I don't understand what you're asking...?
Please reply with the full and exact text of the exercise and its instructions, including clarification of what "X" is supposed to indicate (is it a "times" sign, a variable, an empty slot to be filled in, or something else?) and what you mean by "X has to match" (match what? how?).
Thank you!
X is an empty slot to be filled. When I said X needs to match, I meant that it has to be the same on all 4 lines.
Here is the question:
Please find a formula that will solve all 4:
1.532 _ 0.659 = 0.623
1.542 _ 0.631 = 0.623
1.589 _ 0.669 = 0.554
1.474 _ 0.705 = 0.669
_ would be the blank space where the formula goes. I have no idea where to begin to create the formula.
6. Originally Posted by sitefeeder
X is an empty slot to be filled. When I said X needs to match, I meant that it has to be the same on all 4 lines.
Here is the question:
Please find a formula that will solve all 4:
1.532 _ 0.659 = 0.623
1.542 _ 0.631 = 0.623
1.589 _ 0.669 = 0.554
1.474 _ 0.705 = 0.669
_ would be the blank space where the formula goes. I have no idea where to begin to create the formula.
from the first 2 lines:
The OPERATOR on the constants in line1
give the same results as the OPERATOR in line2 on those constants.
Line4 and Line5 have a constant (0.669) in common.
That is the key to defining what the operator does.
If ADDITION/SUBTRACTION
subtract line2 from line1
$1.532 \,\,\square \,\, 0.659 = 0.623$
$1.542 \,\,\square \,\, 0.631 = 0.623$
results in:
$-0.010 \, [ \, \square \, - \, \square \, ] + 0.028 = 0$
This does NOT hold if the OPERATOR is multiplication or division or exponentiation.
7. Thank you very much.
8. Actually the formula has to be the same on all 4 lines without using the other lines to solve it.
like this
1_2 = 3
2_1 = 3
0_3 = 3
3_0 = 3
The answer to the above example would be + without using the other lines to solve the problem.
http://mathoverflow.net/questions/19243/do-the-empty-set-and-the-entire-set-really-need-to-be-open/19250
## Do the empty set AND the entire set really need to be open? [closed]
My question is motivated by the previous discussion 'Why is a topology made of open sets?'. While the axioms for arbitrary unions and finite intersections are without doubt essential to the concept of a topological space, the 1st axiom (the set itself and the empty set are open) seems rather technical. So, do we really need these conditions in order to build most (if not all) of point-set topology without significant changes? In other words, if we leave out the 1st axiom, can point-set topology still remain as useful and powerful in terms of what we actually need to do analysis and geometry?
EDIT: Expanded the topic name in accordance with my question.
The empty set is the empty union, and the entire set is the empty intersection. In other words, from a categorical perspective we want to keep both of them to ensure that the obvious category built from open sets has an initial and terminal object. – Qiaochu Yuan Mar 24 2010 at 22:45
I think a better justification for using the empty set as an open set is one which is not about the empty set as a space in its own right, but rather one which refers to examples of open sets in spaces that people are really interested in: nonempty topological spaces (Euclidean space, etc.). The question is not about why the empty set is a topological space, but why the empty set should be an open subset of a topological space. Incidentally, the one-point set would be a final object. – KConrad Mar 24 2010 at 22:55
@KConrad: I think Qiaochu was talking about the category of open subsets of a fixed topological space with morphisms given by inclusions. Then the empty set is initial and the whole space is terminal. – Rasmus Mar 24 2010 at 23:09
Rasmus: I see your point, but still I don't think this is an issue which should be settled by category theory. It's sort of like justifying the zero ring because it serves as an initial/final object in the category of rings. While that is a "high-level" argument in favor of allowing the zero ring, there is a justification for the zero ring that makes sense at a simpler level: I would like to create a ring $R/I$ for any ideal $I$, and when $I = R$ I must allow the zero ring. OK, now I see where that is going: why let $R$ be an ideal in $R$? Oy. – KConrad Mar 24 2010 at 23:21
I have voted to close. Yes, you could dispense with saying that the total space and the empty subspace are open at the cost of having to say "except for the total space / empty subspace" in many definitions and constructions. There's nothing really lost, because we do not, after all, really have a choice in the matter: if we want a clean definition, then it is forced on us by the axioms. Otherwise we get a sloppy definition, but not one with any hidden flaws. So nothing essential is gained or lost either way (except cleanliness). Four answers is enough, I think. – Pete L. Clark Mar 25 2010 at 0:01
## 7 Answers
Here's a boring reason, and it may or may not convince you: any function $f : X \to Y$ between topological spaces has the property that the preimage of the entire space $Y$ is the entire space $X$, and the preimage of the empty subset of $Y$ is the empty subset of $X$. So if you allow topological spaces in which either the entire set or the empty set is not open, there are no continuous functions from these spaces to "classical" topological spaces! Given that you agree with me that this is undesirable behavior, I think you are forced to make the entire set and/or the empty set either always open or always not open, and I think if you pick the second option then nothing changes except that, as KConrad says, it becomes unnecessarily harder to say things.
Actually, the situation is even worse: if the empty set isn't allowed to be open in $X$ then the continuous functions $X \to Y$ cannot miss any open set in $Y$, and if the entire set isn't allowed to be open in $X$ then the continuous functions $X \to Y$ cannot take values entirely in a proper open subset of $Y$. I think these are both much more unnatural than allowing the entire set and the empty set to be open. This is assuming you agree that the standard definition of continuity is natural.
These are indeed some significant changes if we leave out the 1st axiom. Thanks for pointing these out! I would not call them undesirable or unnatural though. It just has some different implications for continuous maps. – ex falso quodlibet Mar 25 2010 at 15:10
If the empty set and the whole space are not open, then many statements you would like to make about open sets need qualifying remarks. It really can happen that two open sets are disjoint (two open balls that are far apart) or their union is the whole space (an appropriate pair of open half-planes that overlap). If the empty set were not open then we would have to say that any finite intersection of open sets is open or is empty. You'd have to tack on "or empty" in a lot of statements (e.g., the complement of a closed set is open or is empty... I assume you would like to call the whole space closed?). It is easier to allow the empty set as an open set to avoid a profusion of "or empty" qualifiers in theorems.
If, as has been suggested in a comment, the issue being raised is whether or not that first axiom about topologies is simply redundant, it isn't. Without that axiom we could consider any single subset of a space as a topology on the space: that one set is closed under arbitrary unions and finite intersections of itself. In that setting the concept of an open cover loses its meaning, so it really seems like a dead end.
Edit: Without the whole space being allowed as open (which can happen for "topologies" without that first axiom), there need not be open coverings, and then the usefulness of point-set topology is seriously damaged.
Similarly, it can happen that two open sets have union the entire set, and we don't want to say that the union of open sets is open or the entire set... – Qiaochu Yuan Mar 24 2010 at 22:55
I don't think he wants to require that the empty set is never open, but rather to lift the restriction that it must be open. For instance, in algebraic geometry - say the theory of algebraic curves - a lot of statements might become simpler if we said that the closed sets are exactly the finite sets of points. This would be consistent, since the intersection of any two nonempty open sets on an irreducible topological space is nonempty. – zeb Mar 24 2010 at 23:06
I also think this does not answer the question. You are just saying that for a space admitting two disjoint open sets, this axiom follows from the intersection axiom. So only irreducible topological spaces remain. For these, calling the empty set open or not is more a matter of convention. But I guess this could change the notion of continuous function for non-surjective functions. – Andrea Ferretti Mar 24 2010 at 23:18
The empty set is vacuously a finite set by any definition of finite... – Harry Gindi Mar 24 2010 at 23:19
Andrea, I was giving as an example one subset by itself. That would fit the conditions of a "topology" without the first axiom, but then the space doesn't admit an open covering in that "topology". It's hard to do much at all without open coverings. – KConrad Mar 24 2010 at 23:25
If one wants constant functions to always be continuous, then one must necessarily have the empty set and the whole space to be open.
From a category theory perspective, it is the continuous functions that are the more fundamental building block of topology, than the open or closed sets. I believe that there is some equivalent way to axiomatise topology via continuous functions using the machinery of sheaves, which is in some ways more "natural" than the simple but somewhat arbitrary-looking axioms for open sets, but I am not an expert on these matters.
Right. But as with Jonas Meyer's commment above, you can work around this in a silly way if you want to: say a map between topological spaces is continuous iff the preimage of every open set is either open, or the empty set, or the whole space. (Note that you don't need to also check the preimages of the empty set and the whole space: there can be no surprises there.) – Pete L. Clark Mar 25 2010 at 3:27
The first sentence is not 100% accurate; any function into a space Y is continuous if Y has no open sets! – Qiaochu Yuan Mar 25 2010 at 5:35
It's not clear to me how much of a problem it is sometimes to have discontinuous constant functions, if the openness of the empty set follows whenever you have two disjoint open sets (and the openness of the whole space follows if every point has a neighbourhood). One could then define a space to be T_{-3} if the empty set and the whole space happen to be open and comment that all reasonable spaces are T_{-3}. But non-T_{-3} spaces are just too silly to be worth considering, so one doesn't do this. – gowers Mar 25 2010 at 9:25
This question may be better answered by going to the intuition.
Let X be a set with some sort of "local structure"; a metric space will do, but any reasonable sort of "closeness" is fine. We want to say a set U ⊆ X is open if, whenever U contains a point p, it also contains all those points q ∈ X which are "near p".
Given this, U = X is clearly open. If p ∈ U, we want all the points of X "near p" to be in U; but every point of X is in U, so this is trivially true.
∅ is also clearly open. For this to be false, ∅ would have to contain a point p but not all the points near p; but as ∅ contains no points, this is impossible.
Also consider the following: Let f : X → {0,1} be the constant function sending every p ∈ X to 0. We want constant functions to be continuous, no matter how fine the topology on {0,1}. So f⁻¹[{0}] = X and f⁻¹[{1}] = ∅ both must be open (and both must be closed).
I see a little more in the question now. It seems that the OP is proposing to eliminate the axioms that the empty space and the total space are open but maintain the axioms that arbitrary (nonempty!) unions and finite (nonempty!) intersections of open sets are open.
In this case there is a little content here, because you can try to figure out whether or not $\emptyset$ and $X$ will then be open.
Note one disturbing fact: with the elimination of the above axiom, there is nothing to imply the existence of any open sets at all! Whether this is good or bad, if it happens there is nothing further to say, so let's assume that there is at least one open set.
Claim: If $X$ is a Hausdorff with more than one point, then the empty set and the total space are open.
Proof: Indeed, the Hausdorff axiom asserts that any two points have disjoint open neighborhoods, so the intersection of these is empty. The same axiom says in particular that every point has at least one open neighborhood (!) which is clearly equivalent to $X$ being open.
Claim: If $X$ is T1 with more than one point, then the total space is open, but the empty set need not be.
Proof: T1 means that the singleton sets are closed. If there are at least two of them, take their intersection: this makes the empty set closed, hence the total space open. On the other hand, the cofinite topology on an infinite set is T1 and the empty set is not open (if we do not force it to be).
I am coming around to agree with K. Conrad in that having $X$ be open in itself may not be just a formality. A topological space in which some point has no neighborhood sounds like trouble...I guess I was thinking that if you get into real trouble, you can just throw $X$ back in as an open set! If you want to put it that way, this is some kind of completion functor from the OP's generalized topological spaces to honest topological spaces.
Anything you can say in topology applies only to the union of all of the open sets in X. If there are points in X not in this union, they might as well not be there; in other words, I see no conceivable reason to exclude the entire set from being open. – Qiaochu Yuan Mar 25 2010 at 1:38
I just noticed something amusing in the above rather trivial claims. If you take T_1 to mean (as I did) that points are closed, then in the above world of non-automatic openness of $\emptyset$ and $X$, Hausdorff does not imply T_1: consider a one point space with non-open empty set! – Pete L. Clark Mar 25 2010 at 3:30
To address the question in a somewhat less-categorical way, I would point out that you can in fact do all of topology without using the expression "open set", by instead refering to the filter of neighborhoods of every point --- then a function $f:X\rightarrow Y$ is continuous iff for all $x\in X$ and every neighborhood $V$ of $f(x)$ there is a neighborhood $U$ of $x$ such that $f(U)\subset V$. You'll remember this as the "epsilon-delta"-style of continuity criterion from calculus, only without mentioning $ε$ or $δ$. The viewpoints are equivalent in the sense that one can define an open set as being a neighborhood of all its points, or contrariwise define a neighborhood of $x$ as including some open set containing $x$.
There's another alternative, to study presentations of topologies; usually a base for a topology is given. These have the advantage of being available as raw data, because the closure requirements for a base are much less stringent from a set-theoretic point of view than those of the whole system of open sets; one consequence is that you can study the topological space $(X,\langle B\rangle)$ in any universe including $X$, $B$ and $(X,B)$. This notion of a presented topology then lets you compare how the properties of a space-given-the-base change after a forcing extension of the universe --- if you're into that sort of thing.
The moral of this story is that open sets are not really the object of study in topology, and changing what you mean by "open set" is mostly going to distract you; continuous functions are one object of study --- as others have pointed out --- and there are many ways to define those.
-
One can define a topological space in terms of closed sets to make this easy to see.
A topological space $T$ satisfies:
- The intersection of any sets $X_i \in T$ is closed (i.e., in $T$)
- The union of finitely many sets $X_i \in T$ is closed.
Then, if we take, e.g., $T=\mathbb{R}$, by the first property above, $(0,1) \cap (10,20) = \emptyset \in T$. So we should consider the empty set to be closed.
The complement of $\emptyset$ is $\mathbb{R}$, so the whole space is closed as well (there was clearly nothing particular about using the reals here so this holds for arbitrary $T$). So we should consider $T$ to be closed as well. [No, you have used the existence of disjoint open sets, which does not hold in a general topological space -- PLC.]
By De Morgan's laws, one can check that the above characterizations of a topological space are equivalent to the usual definition in terms of open sets, so for consistency we should require $\emptyset$ and $T$ to be both open and closed in addition to the above two properties.
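For reference, the De Morgan step is just $X\setminus\bigcup_{i}U_i=\bigcap_{i}(X\setminus U_i)$ and $X\setminus\bigcap_{i}U_i=\bigcup_{i}(X\setminus U_i)$, so arbitrary unions of open sets correspond to arbitrary intersections of closed sets, and finite intersections of open sets to finite unions of closed sets.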
-
This is not true. – Harry Gindi Mar 24 2010 at 23:22
To elaborate, all that you've done is show that the empty set must be closed, from which it follows that the whole space should be open! A tautology! – Harry Gindi Mar 24 2010 at 23:25
http://mathhelpforum.com/differential-geometry/185879-cauchy-residue-theorem-integral.html
# Thread:
1. ## Cauchy Residue Theorem Integral
Hey Guys,
Could you please clarify this problem to me?
When I separate the denominator, I get two values that I can use in the Cauchy Residue theorem.
Thus, should I only take one value (depending on the contour I use obviously), if so do I take the negative one or the positive one?
[The problem statement was attached as an image; the integral is reproduced in the reply below.]
2. ## Re: Cauchy Residue Theorem Integral
Originally Posted by mathshelpee
Hey Guys,
Could you please clarify this problem to me?
When I separate the denominator, I get two values that I can use in the Cauchy Residue theorem.
Thus, should I only take one value (depending on the contour I use obviously), if so do I take the negative one or the positive one?
What precisely do you mean? What contour would you use?
3. ## Re: Cauchy Residue Theorem Integral
Originally Posted by mathshelpee
Hey Guys,
Could you please clarify this problem to me?
When I separate the denominator, I get two values that I can use in the Cauchy Residue theorem.
Thus, should I only take one value (depending on the contour I use obviously), if so do I take the negative one or the positive one?
The integral is...
$\int_{- \infty}^{+ \infty} \frac {e^{i x}}{(x-\pi)^{2}+a^{2}}\, dx$ (1)
The solution with the Cauchy residue theorem requires the choice of an integration path in the complex plane. In this case an adequate choice is the path in the figure (a large semicircle in the upper half plane, closed along the real axis)...
The poles of the complex function in (1) are at $z=\pi \pm i\,a$, and the one inside the path is the pole with positive imaginary part...
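Carrying the computation one step further (assuming $a>0$, so that the closed path encloses only the pole at $z=\pi+i\,a$), the residue theorem gives
$\int_{-\infty}^{+\infty}\frac{e^{i x}}{(x-\pi)^{2}+a^{2}}\,dx = 2\pi i \,\operatorname{Res}_{z=\pi+ia}\frac{e^{i z}}{(z-\pi)^{2}+a^{2}} = 2\pi i\,\frac{e^{i(\pi+ia)}}{2ia} = -\frac{\pi}{a}\,e^{-a}.$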
Kind regards
$\chi$ $\sigma$
http://mathhelpforum.com/calculus/4656-area-between-curve-x-axis.html
Thread:
1. Area between curve and x axis
If $y = \sqrt{4 - x^2}$, then the area of the region limited by this curve $y$ and the x-axis is
a) $\pi$
b) $2\pi$
Calculate the correct result.
2. Originally Posted by bret80
If $y = \sqrt{4 - x^2}$, then the area of the region limited by this curve $y$ and the x-axis is
a) $\pi$
b) $2\pi$
Calculate the correct result.
You have,
$y=\sqrt{4-x^2}$
A semi-circle with radius 2.
Instead of finding the complicated,
$\int_{-2}^2 \sqrt{4-x^2}dx$
Simplify: use the formula from geometry.
The area of the circle is,
$\pi (2)^2=4\pi$
Since this is half a circle, the answer is
$2\pi$
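For a quick numerical sanity check of that value, here is a sketch in Python using a plain midpoint sum (any quadrature routine would do the same job):
````
from math import sqrt, pi

# Midpoint Riemann sum for the area under y = sqrt(4 - x^2) on [-2, 2],
# i.e. the upper half of a circle of radius 2, whose exact area is 2*pi.
n = 100_000
dx = 4.0 / n
area = sum(sqrt(4 - (-2 + (i + 0.5) * dx) ** 2) for i in range(n)) * dx

print(area, 2 * pi)  # both print approximately 6.2832
````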
3. Hello, bret80!
If $y \:= \:\sqrt{4 - x^2}$, then the area of the region limited by $y$ and the x-axis is
$a)\;\pi\qquad b)\;2\pi$
Calculate the correct result.
If you are expected to do this with Calculus,
. . you need to know Trig Substitution (and a bit of Trig).
I'll baby-step through it for you . . .
$A\;=\;\int^2_{-2}\sqrt{4-x^2}\,dx$
Let $x = 2\sin\theta\quad\Rightarrow\quad dx = 2\cos\theta\,d\theta$
. . and $\sqrt{4 - x^2} \:=$ $\:\sqrt{4 - 4\sin^2\theta} \:=\:\sqrt{4(1 - \sin^2\theta)} \:=\:\sqrt{4\cos^2\theta} \:=\:2\cos\theta$
Substitute: . $A\;=\;\int(2\cos\theta)\,(2\cos\theta\,d\theta) \;= \;4\int\cos^2\theta\,d\theta$
Double-angle identity: . $\cos^2\theta \:=\:\frac{1 + \cos2\theta}{2}$
We have: . $A\;=\;4\int\left(\frac{1 + \cos2\theta}{2}\right)\,d\theta \;= \;2\int(1 + \cos2\theta)\,d\theta$
Then: . $A \;= \;2\left(\theta + \frac{1}{2}\sin2\theta\right) \;= \;2\left(\theta + \sin\theta\cos\theta\right)\,\bigg]^{\frac{\pi}{2}}_{\text{-}\frac{\pi}{2}}$ **
. . $A\;= \;2\left[\frac{\pi}{2} + \sin\frac{\pi}{2}\cos\frac{\pi}{2}\right] - 2\left[\text{-}\frac{\pi}{2} + \sin\left(\text{-}\frac{\pi}{2}\right)\cos\left(\text{-}\frac{\pi}{2}\right)\right]$
. . $A \;= \;2\left[\frac{\pi}{2} + (1)(0)\right] - 2\left[\text{-}\frac{\pi}{2} + (\text{-}1)(0)\right] \;= \;\pi + \pi \;= \;\boxed{2\pi}$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
**
A change of limits . . .
Since $x \,= \,2\sin\theta\quad\Rightarrow\quad\sin\theta \,= \,\frac{x}{2}$
. . . . $\begin{array}{ccc}x \,= \,\text{-}2,\quad\Rightarrow\quad \theta \,= \,\text{-}\frac{\pi}{2} \\ \\x \,= \,2,\quad\Rightarrow\quad\theta \,= \,\frac{\pi}{2}\end{array}$
http://electronics.stackexchange.com/questions/13746/why-does-a-resistor-need-to-be-on-the-anode-of-an-led
# Why does a resistor need to be on the anode of an LED?
Please be kind, I am an electronics nub. This is in reference to getting an LED to emit photons.
From what I read (Getting Started in Electronics - Forrest Mims III and Make: Electronics) electrons flow from the more negative side to the more positive side.
In an example experiment (involving a primary dry cell, a SPDT switch, a resistor and an LED) it states that the resistor MUST be connected to the anode of the LED. In my mind, if the electrons flow from negative to positive, wouldn't the electron flow run through the LED before the resistor; thereby making the resistor pointless?
-
While electrons flow from negative to positive, it's usually better to just play along with the convention that current flows from positive to negative so you don't confuse anyone when you talk about it. – Nick T May 2 '11 at 12:08
## 7 Answers
The resistor can be on either side of the LED, but it must be present. When two or more components are in series, the current will be the same through all of them, and so it doesn't matter which order they are in. I think the way to read "the resistor must be connected to the anode" is as "the resistor cannot be omitted from the circuit."
-
No it would not make the resistor pointless. Imagine if the resistor were so large it completely prevented electrons from flowing. Does it matter which side of the LED it's on? Either way, it will break the circuit and prevent current from flowing.
Don't think about individual particles traveling through the circuit. The charged particles are not "used up" by the LED. They go through it, and their motion is what carries energy from one place to another.
Think about all the particles moving at all points in the circuit at once, like a belt or chain. If you slow down the chain at one point, it slows down at every other point, too, due to the links pushing and pulling against each other.
I read Getting Started in Electronics as a kid, and I think it teaches ideas like this poorly. I had to unlearn everything in college and don't recommend it. Try this instead:
Try out this circuit. When you adjust the resistance, does it only slow down the charges before the resistor, or does it change the speed of all the charges in the entire circuit?
-
I love the "chain" or "link" analogy. I have been told other analogies, but none as good as those. – Kellenjb May 2 '11 at 13:04
A hand-crank on one end of a chain and a load on the other end. When you turn it, the entire chain moves, but it's not used up. Energy is transferred by the chain links pulling on each other, and this pulling movement travels very quickly from source to load, even if the chain links themselves move slowly. The only downside of the analogy is that the links only pull, they don't push on the other half of the circuit. Pipes filled with water works a little better. – endolith May 2 '11 at 15:39
I've used pipes filled with water some, but in that analogy I find that people tend to think of the water "going away" as soon as it comes out of the end. Guess it depends on who you are talking to and what you are trying to explain for what works the best. – Kellenjb May 2 '11 at 15:52
A loop of pipe, completely filled with water, with no outlets. An outlet would be like electric charge spraying out of the end of a wire, which it doesn't do. If the pipe is cut, it immediately forms caps to prevent the water from escaping. :) If you pump water at one point in the loop, it pushes and pulls and causes the water in the entire pipe to loop at the same speed. If you use a piston to move it back and forth, the energy travels in waves at the speed of sound in water, while the water itself moves slowly. – endolith May 2 '11 at 17:57
That Circuit Simulator applet is awesome ... played with it for 30 minutes now. :) – Spechal May 4 '11 at 5:58
Regardless of what side the resistor is placed on, it limits the amount of current that flows through the LED. It is usually a lot simpler to not think about what the electrons are doing and instead just think about it in sense of Resistance, Current, Voltage, and sometimes power.
In the case of an LED, if you connect a constant voltage source across the LED, the LED will act like almost 0 resistance, which, based on V=IR (or I=V/R), will result in a very large current, which causes the LED to "pop".
You have to connect a resistor in order to set the current that your LED is expecting.
-
The resistor doesn't need to be on the anode side, but it needs to be there (unless the voltage of the power supply is equal to or less than the voltage drop of the LED.)
After all, if you have a 9 volt power source, and an LED that drops 2 volts, then the other 7 volts have to get dropped someplace.
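As a minimal sketch of that arithmetic in Python (the 9 V supply and 2 V drop are the figures from this answer; the 20 mA target current is an assumed typical value):
````
# Series resistor sizing for an LED: the resistor must drop the supply voltage
# minus the LED forward voltage, and that drop divided by R sets the current.
V_SUPPLY = 9.0    # volts, from the example above
V_LED = 2.0       # volts, LED forward drop from the example above
I_TARGET = 0.020  # amperes (20 mA), assumed target current

R = (V_SUPPLY - V_LED) / I_TARGET   # Ohm's law applied to the resistor
P = (V_SUPPLY - V_LED) * I_TARGET   # power dissipated in the resistor

print(f"R = {R:.0f} ohm, P = {P * 1000:.0f} mW")  # R = 350 ohm, P = 140 mW
````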
-
If monitoring the current through the LED is important for you, put the resistor on the low side. So it will be easier to measure the current on each LED. By "easier" I mean, you fix one probe of the voltmeter to GND, and use the other one only to read the voltages on the resistors. So the current through the LED will be:
$I_{LED} = \frac{V_R}{R} \\ \begin{matrix} I_{LED} & : & \mbox{The current through the LED} \\ V_R & : & \mbox{The voltage on the resistor} \\ R & : & \mbox{Resistor series with the LED} \\ \end{matrix}$
If you want to monitor the voltages on the LEDs, then you should connect the LEDs to the low side. That way, you can read the voltages by fixing one of the probes to GND.
If you don't care about the voltages or currents on/through the LEDs (e.g., you are working with a digital circuit, or the LED is only an indicator), then it doesn't matter on which side you connect the LEDs and the resistors.
-
Look at the Forrest Mims III book again. It does not claim that resistors must be on the anode and has examples where they are on the cathode. In my 1988 edition of the book, series protection for LEDs is introduced on P. 69:
LED DRIVE CIRCUIT - Because LEDs are current dependent, it's usually necessary to protect them from excessive current with a series resistor. Some LEDs include a built-in series resistor. Most do not.
A formula is then given about how to calculate the resistance from the supply voltage and the LED's forward current. The accompanying diagram has the resistor on the anode, neglecting to explain that the choice is arbitrary.
However, on the same page, a "LED polarity indicator" device is introduced where two back-to-back LEDs share a resistor which is necessarily on the anode of one and the cathode of the other. In the "tri-state polarity indicator", the limit resistor is on the supply side, rather than ground side, too.
It's usually nicer in some sense (if there is a choice) to have the important device be connected to ground, and the surrounding paraphernalia, like biasing resistors, to be on the supply side.
In high voltage circuits, the choice between a supply-side or ground-side load matters from a safety perspective. For instance, should you place the light switch on the hot side of the lamp, or on the neutral? If you wire the switch so that the light is turned off by interrupting the neutral return, that means that the light bulb socket is permanently connected to hot! This means that turning off the switch before changing the bulb does not actually make it any safer; the main panel has to be used to actually break the hot connection to the socket. In a battery circuit, there is no safety ground: the minus terminal is arbitrarily designated as the common return, and the word "ground" is used for that common.
Whether a load device is ground side or supply side also makes a difference if the voltage from the device is being conveyed to some other circuit where it is used for some purpose. A 1.2V LED whose anode is connected to 5V will provide a 3.8V reading from the cathode, if current flows. If the cathode is grounded instead, then the anode will provide a 1.2V reading. So the placement of the resistor is irrelevant only if no such situation exists in the circuit: there is no third connection to the junction between the resistor and the LED which has an effect on some other circuit.
-
It does not matter on which side it goes, anode or cathode, since the resistor has no polarity. But I put it on the anode side for a single LED, on the cathode side for LEDs in series, and on the cathode side for parallel connections.
-
It's not because it doesn't have polarity. Another diode has polarity and can also be placed on either side. And parallel connections on cathode side isn't correct either: again both sides are ok. – Federico Russo Oct 7 '12 at 11:35
http://physics.stackexchange.com/questions/27830/the-number-of-independent-variables-in-the-lagrangian-and-hamiltonian-methods-in/27831
# The number of independent variables in the Lagrangian and Hamiltonian methods in Classical Mechanics
We are told in Landau's Classical Mechanics that in the Hamiltonian method the generalized coordinates $q_j$ and generalized momenta $p_j$ are independent variables of a mechanical system. However, in the Lagrangian method only the generalized coordinates $q_j$ are independent. In that case the generalized velocities are not independent, as they are the derivatives of the coordinates.
So, as I understand it, the first method has twice as many independent variables as the second. This fact is used during the variation of the action and in finding the equations of motion.
My question is: can the number of independent variables of the same system be different in the two cases? Besides that, how can the momenta be independent of the coordinates if we have the equation $$p=\frac{\partial L}{\partial \dot{q}}$$
Thank you very much! I hope that my question is clear.
-
If you like this question you may also enjoy reading this, this, and this Phys.SE posts. – Qmechanic♦ May 5 '12 at 17:53
## 3 Answers
$q_j$ and $\dot q_j$ are independent. I think it's more straightforward (at first) to think of this in terms of Newton's equations of motion, where the force determines the accelerations of the various particles, than in terms of the more abstract Hamiltonian methods. Because the forces determine the accelerations, not the velocities, both the initial positions and the initial velocities have to be given to determine the trajectories, which is just to say that the $q_j$ and the $\dot q_j$ independently determine the trajectories.
Note that the Lagrangian function is written as a function of both $q_j$ and $\dot q_j$, $L(q_j,\dot q_j)$, which makes sense of the equation for the momenta that you cite, $p_j=\frac{\partial L(q_j,\dot q_j)}{\partial \dot q_j}$.
So, there are the same numbers of independent variables in the Lagrangian and in the Hamiltonian formalisms.
-
1L) The (generalized) position $q$ and (generalized) velocity $v$ are independent variables of the Lagrangian $L(q,v,t)$.
1H) The position $q$ and momentum $p$ are independent variables of the Hamiltonian $H(q,p,t)$.
2L) The position path $q:[t_i,t_f] \to \mathbb{R}$ and velocity path $\dot{q}:[t_i,t_f] \to \mathbb{R}$ are not independent in the Lagrangian action $$S_L[q]~=~ \int_{t_i}^{t_f}\!dt \ L(q ,\dot{q},t).$$ See also this question.
2H) The position path $q:[t_i,t_f] \to \mathbb{R}$ and momentum path $p:[t_i,t_f] \to \mathbb{R}$ are independent in the Hamiltonian action $$S_H[q,p]~=~\int_{t_i}^{t_f}\! dt~(p \dot{q}-H(q,p,t)).$$
3L) Under extremization of the Lagrangian action $S_L[q]$ wrt. the path $q$, the corresponding equation for the extremal path is Lagrange's equation of motion $$\frac{d}{dt}\frac{\partial L(q,\dot{q},t)}{\partial \dot{q}} ~=~ \frac{\partial L(q,\dot{q},t)}{\partial q}.$$
3H) Under extremization of the Hamiltonian action $S_H[q,p]$ wrt. the paths $q$ and $p$, the corresponding equations for the extremal paths are Hamilton's equations of motion $$-\dot{p}~=~\frac{\partial H}{\partial q} \qquad \text{and}\qquad \dot{q}~=~\frac{\partial H}{\partial p} ,$$ respectively.
4L) The equation $p=\frac{\partial L}{\partial v}$ is a definition in the Lagrangian formalism. E.g., for a non-relativistic free point particle, it encodes the relation $p=mv$.
4H) The equation $\dot{q}=\frac{\partial H}{\partial p}$ is an equation of motion in the Hamiltonian formalism. E.g., for a non-relativistic free point particle, it encodes the relation $p=m\dot{q}$.
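A minimal worked example, added here for concreteness (the one-dimensional harmonic oscillator), illustrating 3L)--4H): $$L(q,\dot{q})=\tfrac{1}{2}m\dot{q}^2-\tfrac{1}{2}kq^2, \qquad p=\frac{\partial L}{\partial \dot{q}}=m\dot{q}, \qquad H(q,p)=p\dot{q}-L=\frac{p^2}{2m}+\tfrac{1}{2}kq^2,$$ so Lagrange's equation reads $m\ddot{q}=-kq$, while Hamilton's equations read $\dot{q}=\partial H/\partial p=p/m$ and $-\dot{p}=\partial H/\partial q=kq$, which together reproduce the same second-order equation.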
-
Perfect realization of the answer, Qmechanic! – Luboš Motl May 5 '12 at 18:57
@Qmechanic : Could you please give the explanation of the Hamiltonian action. Why does it depend on momentum path? – achatrch May 5 '12 at 19:29
I updated the answer. – Qmechanic♦ May 5 '12 at 20:08
I meant this as a comment to Peter Morgan's answer but it got too long to fit.
For Lagrangians that are quadratic in the generalized velocities $\dot{q}_i$, $i=1,\ldots,N$, the $N$ equations of motion obtained from the Euler-Lagrange equations will be second order in time, whereas Hamilton's equations of motion for $(q_i, p_i)$, $i=1,\ldots,N$, are first order in time. So the number of independent variables is the same.
As said in Peter's answer, think of the one equation $$m \ddot{\mathbf{r}} = \mathbf{F}$$ versus the two equations $$\dot{\mathbf{r}} = \mathbf{p}/m, \qquad \dot{\mathbf{p}}=\mathbf{F}.$$
-
http://mathematica.stackexchange.com/questions/2042/simpler-way-of-performing-gaussian-elimination
# Simpler way of performing Gaussian Elimination?
Is there a simpler way of performing Gaussian Elimination other than using RowReduce? Such as a single built in function?
Edit:
Look at the example from our simulation class. Not too difficult, but using this method to solve problems of the sort is new to most of us. We are solving for P# of course.
Also, to those asking... While I can see what is going on below, I really don't understand what it all means. Asking why not RowReduce? I guess not everyone is at that level of use yet, and I don't like just cutting and pasting internet code without understanding it. I simply wondered if there was a function that would do what the code did, but be built in.
```` GaussianElimination[m_?MatrixQ, v_?VectorQ] :=
Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]
````
-
What's wrong with `RowReduce`? How is it not satisfactory and what do you mean by "simpler" solution? – rm -rf♦ Feb 20 '12 at 7:36
Do you mean LinearSolve? – ruebenko Feb 20 '12 at 8:31
If you just need to solve an equation, use `LinearSolve`, no need for implementing a specific method. If you need the specific method, then what R.M. said. – Szabolcs Feb 20 '12 at 8:31
Simpler, like a function versus having to device my own. I will look at LinearSolve, but might just use RowReduce. – FossilizedCarlos Feb 20 '12 at 8:42
Could you fill in some of the mathematical details in you question, then we could better direct you to the function you should use? – rcollyer Feb 21 '12 at 2:26
## 1 Answer
Based upon your update, you are trying to solve the system
$$\mathbf{A}\vec{x} = \vec{b}$$
for $\vec{x}$, so `LinearSolve` is exactly what you want. Also, it has the exact form
````LinearSolve[A, b]
````
that you're asking for. Internally it uses a form of Gaussian elimination to solve such systems; this is most likely a variant of LU decomposition, but other methods are available. If you have more than one $\vec{b}$, you can use the form
````solv = LinearSolve[A]
````
which returns a `LinearSolveFunction` which you can apply to each $\vec{b}$ in turn via
````solv[b]
````
Edit: In the case of your example, `RowReduce` will return the identity matrix as your matrix is invertible (non-singular), so it would not be immediately useful. You could make it "useful" and create an augmented matrix, via
```` augA = ArrayFlatten[{{#, IdentityMatrix[Length@#]}}]& @ A
````
which creates $$\left(\mathbf{A}\, |\, \mathbf{I} \right).$$ Then,
```` redAugA = RowReduce[augA]
````
gives a matrix of the form $$\left(\mathbf{I}\, |\, \mathbf{A}^{-1} \right),$$ and the inverse is extractable via
````redAugA[[All, Length@A + 1 ;; ]]
````
which uses the shorthand form of `Part` and `Span` to extract only the columns you want. But if you're going to go to the trouble of getting the inverse, you might as well use `Inverse[A]` directly.
However, if your matrix is singular, i.e. `MatrixRank[A] < Length[A]`, then you need to use `LeastSquares` which returns the vector, $\vec{x}$, that minimizes $\lVert\mathbf{A}\vec{x} - \vec{b}\rVert_2$ where $\lVert\cdot\rVert_2$ refers to the standard Euclidean norm. Which has the same calling convention
````LeastSquares[A, b]
````
but it lacks the pre-calculation capabilities of `LinearSolve`. If you need those, then you would first decompose the matrix using `QRDecomposition` and then `LinearSolve` is used, as follows
````{q,r} = QRDecomposition[A];
LinearSolve[r, q.b]
````
Or, if you want a single function that operates like the second form of `LinearSolve` but with the least squares minimization,
````savedLeastSquares[m_?MatrixQ]:=
Module[{q,r},
{q,r} = QRDecomposition[m];
LinearSolve[r, q.#]&
]
````
-
One could also use `SingularValueDecomposition[]` instead of `QRDecomposition[]`... – J. M.♦ May 4 '12 at 5:17
http://gilkalai.wordpress.com/2010/10/21/polymath3-polynomial-hirsch-conjecture-4/?like=1&source=post_flair&_wpnonce=a832ba55c5
Gil Kalai’s blog
## Polymath3: Polynomial Hirsch Conjecture 4
Posted on October 21, 2010 by Gil Kalai
So where are we? I guess we are trying all sorts of things, and perhaps we should try even more things. I find it very difficult to choose the more promising ideas, directions and comments as Tim Gowers and Terry Tao did so effectively in Polymath 1,4 and 5. Maybe this part of the moderator duty can also be outsourced. If you want to point out an idea that you find promising, even if it is your own idea, please, please do.
This post has three parts. 1) Around Nicolai’s conjecture; 2) Improving the upper bounds based on the original method; 3) How to find super-polynomial constructions?
## 1) Around Nicolai’s conjecture
### Proving Nicolai’s conjecture
Nicolai conjectured that $f^*(d,n) \le d(n-1)+1$ and this bound, if correct, is sharp as seen by several examples. Trying to prove this conjecture is still, I feel, the most tempting direction in our project. The conjecture is as elegant as Hirsch's conjecture itself.
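To recall why the bound, if correct, would be sharp: the simplest example I know (essentially the one Nicolai gave, if I remember it correctly) is the path of single-monomial families that changes one variable at a time, $$x_1^d,\; x_1^{d-1}x_2,\; x_1^{d-2}x_2^2,\;\dots,\; x_2^d,\; x_2^{d-1}x_3,\;\dots,\; x_n^d,$$ which takes $d$ steps to pass from $x_i^d$ to $x_{i+1}^d$ and therefore has length $d(n-1)+1$; each variable appears in a consecutive block of families, so convexity holds.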
Some role models: I remember hard conjectures that were proved by amazingly simple arguments, like in Adam Marcus’s and Gabor Tardos’s proof of the Stanley-Wilf conjecture, or by an ingenious unexpected algebraic proof, like Reimer’s proof of the Butterfly lemma en route to the Van den Berg Kesten Conjecture. I don’t have the slightest idea how such proofs are found.
### More general settings.
In some comments, participants offered even more general conjectures with the same bound which may allow some induction process to apply. (If somebody is willing to summarize these extensions, that would be useful.)
Do you think that there is some promising avenue to attack Nicolai’s conjecture?
### Deciding the case d=3.
Not much has happened on the $f^*(3,n)$ front.
### What about f(d,n)?
ERSS do not give a quadratic lower bound for f(d,n) but only such a bound up to a logarithmic factor. Can the gap between sets and multisets be bridged?
And what about f(2,n); do we know the answer there?
### Disproving Nicolai’s conjecture
This is a modest challenge in the negative direction. The conjecture is appealing but the evidence for it is minimal. This should be easier than disproving PHC.
## 2) Improving the upper bounds based on the original method.
Remember that the recurrence relation was based on reaching the same element in sets from the first $f^*(n/2)$ families and from the last $f^*(n/2)$ families. The basic observation is that in the first $f^*(k)+1$ families, we must have multisets covering at least k+1 elements altogether.
There should be some “tradeoff”: Either we can reach many elements much more quickly, or else we can say something about the structure of our families which will help us.
What will this buy us? If we replace $f(n/2)$ by $f(n/10)$ the effect is small, but replacing it by $f(\sqrt n)$ will lead to a substantial improvement (not yet PHC).
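For reference, the recurrence behind the current quasi-polynomial bound (in the non-uniform setting; it reappears in the comments below) is $f(n) \le f(n-1)+2f(n/2)$, which unrolls to a bound of the shape $n^{O(\log n)}$.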
Maybe there is hope that inside the “do loop” we can cut back. We arrived at a common ‘m’ by going $f(n/2)$ from both ends. We can even reach many ‘m’s by taking $f(2n/3)$ steps from both ends. But then when we restrict ourselves to sets containing ‘m’, do we really start from scratch? This is the part of the proof that looks most wasteful.
Maybe looking at the shadows of the families will help. There were a few suggestions along these lines.
What do you regard as a promising avenue for improving the arguments used in current upper bound proofs?
## 3) How to find super-polynomial constructions?
Well, I would take sets of small size compared to $n$. And we want the families to be larger as we go along, and perhaps also the sets in the families to be larger. What about taking, say, at random, in ${\cal F}_1$ a few small sets, and in ${\cal F}_2$ much larger sets and so on? Achieving convexity (condition (*)) is difficult.
Jeff Kahn has (privately) a general sanity test against such careless suggestions, even if you force this convexity somehow: See if the upper bound proof gives you a much better recurrence. In any case, perhaps we should carefully check such simple ideas before we try to move to more complicated ideas for constructions? Maybe we should try to base a construction on the upper bound ideas. In some sense, ERSS constructions and even Nicolai’s simple one resemble the proof a little. But it goes only “one level”. It takes a long time to reach from both ends sets containing the same element, but then multisets containing the common ‘m’ use very few elements. What about Terry’s examples of families according to the sum of indices? (By the way, does this example extend to d>3?) Can you base families on more complicated equations of a similar nature?
Anyway, it is perhaps time to talk seriously about strategies for counterexamples.
What do you think a counterexample will look like?
This entry was posted in Combinatorics, Convex polytopes, Open discussion, Open problems, Polymath3 and tagged Hirsch conjecture, Polymath3. Bookmark the permalink.
### 73 Responses to Polymath3: Polynomial Hirsch Conjecture 4
1. Gil Kalai says:
Nick Harvey suggested that a certain method by Seymour will be relevant for improving the upper bound in this comment. http://gilkalai.wordpress.com/2010/10/10/polymath3-polynomial-hirsch-conjecture-3/#comment-3811
2. Pingback: Tweets that mention Polymath3: Polynomial Hirsch Conjecture 4 | Combinatorics and more -- Topsy.com
3. Pingback: Polymath3 « Euclidean Ramsey Theory
4. bon says:
Sorry if this is an obvious question (I am a graduate student), but I was wondering why, for Nicolai’s conjecture, it isn’t enough to show that f*(d,n)=d(n-1)+1 in the case when the families of monomials each contain only one monomial.
If you have a set F_1,…F_t of families of monomials satisfying the given condition then I think you can construct a set a_1,…,a_t of monomials satisfying the given condition (with a_i in F_i):
Take a_1 in F_1 and a_t in F_t arbitrarily.
Then take a_2 and a_(t-1) arbitrarily from F_2 and F_(t-1) such that they satisfy the necessary gcd property given by a_1 and a_t.
Now choose a_3 and a_(t-2) arbitrarily such that they satisfy the gcd conditions from a_2 and a_(t-1). then they automatically satisfy the gcd conditions given by a_1 and a_t.
Thus we have a_1,..,a_t satisfying the gcd property. But this can’t be longer than d(n-1)+1 (I think this has been shown for monomials but if not I think you can show it by a certain lexicographic ordering (depending on a_1,….,a_n) on the variables x_1,….x_n.).
I’m sure I must be missing something but I thought I’d be bold.
In any event thanks a lot for sharing your guys work.
• Gil Kalai says:
Bon, it is not clear to me why a_2 will satisfy the gcd condition for a_1 and a_3?
• Paco Santos says:
Bon, the reason why that does not work is that you may run out of monomials to be used in the next level because some variables have been abandoned in a previous step.
You may want to look at the following example. It does not have maximal length (Hahnle’s bound here would be 9 instead of 8) but it illustrates the point:
[{11}, {15}, {14, 55}, {12, 35, 44}, {13, 25, 45}, {24, 34}, {23}, {22}]
In the third level you need to choose between 14 and 55. The first choice means that you will not be allowed to use the variable 5 again, the second that you will not be able to use 1 again. You can check that there is no choice leading all the way to the final monomial 22.
The example has been adapted from this example by Jason Conti, but there may be simpler ones.
• Paco Santos says:
hum… the funny 8) was meant to be an “8″ followed by a “)”…
5. Paco Santos says:
Since we believe Nicolai’s nice conjecture, we believe that $f^*(d,n) = f^*(n-1,d+1)$. Do we believe this formula has any significance or is it just a coincidence?
Certainly, the number of monomials of degree d in n variables equals the number of monomials of degree n-1 in d+1 variables. But the bijection(s) between monomials do(es) not seem to preserve convexity at all. In fact, the first problem is that in order to set up a bijection you need to order your variables, which do not come naturally ordered.
• Gil Kalai says:
This is a very curious fact.
6. Gil Kalai says:
Regarding Bon’s question. Perhaps we can generalize Nicolai’s conjecture from disjoint families of monomials to disjoint subspaces in the vector spaces of monomials. And then maybe for the generalized question we can hope that we can always reduce the dimension of the subspaces while keeping the convexity (yet to be defined) condition.
7. Yury Volvovskiy says:
I’ve been thinking about an approach that I’m not sure leads anywhere but for what it’s worth, here it is.
We study a function $f(d,n)$. Fix some number $\alpha$, $0<\alpha<1$, consider a sequence of length $t$, and suppose first that $\left|U_{t/2}\right|\ge n\alpha$. Then there are three sub-cases:
(b1) $U_{t/2}$ intersects both $U_1$ and $U_t$. Let $s$ be the largest index such that the intersection of $U_s$ and $U_1$ is non-empty. Note that the intersection of $U_{s+1}$ and $U_t$ is non-empty too. Therefore, $t\le f(d-1,n-1)+f(d-1,n-d-1).$
(b2) $U_{t/2}$ intersects one of $U_1$ and $U_t$, say $U_1$. Then $t/2\le f(d-1,n-d-1)$.
(b3) $U_{t/2}$ intersects neither $U_1$ nor $U_t$. Let $s$ be the largest index such that the intersection of $U_s$ and $U_{t/2}$ is non-empty. We can assume that $\left|\cup_{j=s+1}^t U_{j}\right|\le \frac{1-\alpha}{2}\,n$ (otherwise we’ll look at the other side of the sequence). So
$t/2 \le f(d-1,n-d-1)+f\left(d, \frac{1-\alpha}{2}\,n\right)$.
In all cases we get some upper bound on $t$. One obvious question I don’t have an answer for is what is a good choice of $\alpha$?
• Yury Volvovskiy says:
Sorry, a part of my comment has disappeared for some reason. I meant to consider a number $\alpha$ (between o and 1), a sequence of length $t$ and two cases:
a) $\left|U_{t/2}\right|\le n\alpha$ and b) $\left|U_{t/2}\right|\ge n\alpha$. The three sub-cases of the latter case survived in the main comment.
In the former case, we can deduce that $t/2\le f\left(d, \frac{1+\alpha}{2}\,n\right)$, because one of the two halves of the sequence is supported on at most $\frac{1+\alpha}{2}\,n$ elements.
• Paco Santos says:
So, you get that f (d,n) is at most the maximum of the following four quantities (I use a instead of alpha to avoid latex):
a) 2 f (d, (1+a)n/2),
b1) f (d-1, n-1) + f (d-1, n-d-1),
b2) 2 f (d-1, n-d-1),
b3) f (d-1, n-d-1) + f (d, (1-a)n/2).
My only observation is that the maximum will always be either (b1) or (a): (b2) is smaller than (b1), and twice (b3) is smaller than (a) + (b2).
• Paco Santos says:
Upps, I forgot a factor of 2 in (b3), which gives t is at most 2f (d-1, n-d-1) + f (d, (1-a)n/2). In particular, (b2) is smaller than (b3) and the maximum lies between (a), (b1) and (b3).
• Yury Volvovskiy says:
Let me try to show the idea in a different setting where it might be more productive. Consider now a non-uniform sequence of length $t$ on $n$ elements, and let’s look at the supports $U_{t/N}$ and $U_{(N-1)t/N}$, where $N$ is some number, potentially depending on $n$. If either of the two supports has size less than or equal to $n/2$ we can conclude that there are at most $3n/4$ elements on one side of it, and therefore $t\le N\, f(3n/4)$. On the other hand, if both supports have size greater than $n/2$ then they have an element in common and thus $t\le \frac{N}{N-2} f(n-1)$.
Now, if we choose $N$ to be a constant then the first estimate is polynomial and the second one is exponential, which is not interesting. But what if one sets $N=n$? Then the second estimate is linear and the first one gives something like $n^{\log n}$, which is much more interesting but still nothing new. There must be a yet better choice for $N$.
• Yury Volvovskiy says:
Well, I guess not. Looks like the best choice for $N$ is $N-2 = \frac{f(n-1)}{f(3n/4)}$ in which case we have $f(n)\le f(n-1)+2f(3n/4)$. That’s a worse estimate than what we already had.
• Yury Volvovskiy says:
It’s quite easy to upgrade the inequality to $f(n)\le f(n-1)+2f(n/2)$. All one has to do is to look at the union of supports from $U_1$ to $U_{t/N}$ as of course was done in the original argument. So I’m not sure any extra mileage can be extracted form this whole thing.
• Paco Santos says:
My general impression is that we know how to take advantage of a sequence being “thin” (case (a) in Yury’s post this morning) but we don’t know how to take advantage of it being “thick”.
One would expect that if, say, half of the elements are used all the way from t/4 to 3t/4, this fact should imply something stronger than t/2 \le f(n-1), which is implied already by a single element being used all the way from t/4 to 3t/4…
8. Olivier Bousquet says:
I have looked at the proof of the upper bound for $f^*(d,n)$ (ie Lemma 1 and Corollary 1 of Polymath3) and I had the following idea which does not quite work but may inspire others:
As usual, we start with a sequence of $d$-uniform multiset families $F_1,\ldots,F_t$.
Let’s partition the sequence $F_1,\ldots,F_t$ into intervals in the following way:
We first pick a random element say $x_1$ of the support of $F_1$ and denote by $I_1$ the interval $[1,i_1]$ where $i_1$ is the last index of an $F_i$ containing $x_1$.
Then we pick a new element $x_2$ in the support of $F_{i_1+1}$ and denote by $I_2$ the interval $[i_1+1,i_2]$ where $i_2$ is the last index of an $F_i$ containing $x_2$. And so on until the end.
By definition, after $i_k$, $x_k$ cannot appear anymore since when one element is removed from the support it cannot be added again.
This means that our sequence of intervals $I_k$ can contain at most $n$ elements.
Also, if we restrict the $F_i$ for $i \in I_k$ by excluding $x_k$, we get a convex sequence of $d-1$-uniform multiset families.
So we have as in the original proof $f^*(d,n)\le \sum_{k} f^*(d-1,|S_k|)$.
Now assume that we would be able to prove that $\sum_{k} |S_k| \le 2n-1$, then together with the fact that we have at most $n$ intervals, and using the induction hypothesis of $f^*(d-1,n)\le (d-1)(n-1)+1$, then we would obtain
$f^*(d,n)\le (d-1)(\sum_k|S_k| - n) + n \le (d-1)(2n-1-n)+n = d(n-1)+1$.
So this would give exactly the right bound, but there are two things that do not exactly work:
The first one is that this is valid only if we have exactly $n$ intervals.
The second one is that we need to prove that $\sum_k |S_k|\le 2n-1$ which is not as easy as in the original proof since we don’t have disjointness for non-consecutive intervals.
But the interesting point that this illustrates is that if one can decompose the sequence into exactly $n$ intervals with the property that each interval contains at least one common element and that $\sum_k |S_k|\le 2n-1$, then we can prove the bound $f^*(d,n)\le d(n-1)+1$. How realistic is such a construction?
• Olivier Bousquet says:
Another way to formulate the above is the following: if we can always decompose the sequence $[1,t]$ into intervals such that $\sum_k (|S_k|-1)\le n-1$ and $k\le n$ then we can prove Nikolai’s conjecture (I am using here the notation of Corollary 1 of Polymath3). Note that since the intervals satisfy the condition that the $F_i$ in intervals $I_k$ don’t contain the common element in $I_\ell$ for any $\ell < k$, then $k$ has to be smaller than $n$. So only the sum condition needs to be verified.
It seems that this is the case (on some simple examples I looked at), provided one can construct the sequence without being forced to start at one end. So in other words, we need to find a decomposition which minimizes $\sum_k (|S_k|-1)$, and on some examples this seems to be possible and yield the desired properties.
9. Paco Santos says:
I can now show that f(5) is at most 12. Since we have Conti’s example of length 11, f(5) must be 11 or 12. In fact, as a byproduct of my proof I have also found a second example of length 11: [{}, {1}, {12}, {125}, {15, 25}, {135, 245}, {145, 235}, {35, 45}, {345}, {34}, {4}].
Suppose we have a sequence of length 13 on 5 elements. Wlog the first or last level consists only of the empty set, so we have a sequence of length 12 with no empty sets. Then:
- F_1 \cup F_2 \cup F_3 already use at least three elements: if not, they form the unique sequence of length three with two elements and no empty set, namely [{1}, {12}, {2}]. But in this case the element {1} has already been abandoned in F_3, so it will not be used again. This means that F_3 … F_12 forms a convex sequence of length 10 in four elements, a contradiction.
- With the same argument, F_10 \cup F_11 \cup F_12 use at least three elements. In particular, F_3 and F_10 have a common element, say 5, so restricting F_3,…,F_10 to the sets using 5 we have a sequence of length 8 on the other four elements. So far so good, since f(4)=8.
- But this would imply that in the restriction we can assume wlog that F_3={\emptyset}. Put differently, F_3 contains the singleton {5}. Since F_3 is the first level using 5, this singleton could be deleted from F_3 without breaking convexity. This gets us back to the case where F_1 \cup F_2 \cup F_3 use only two elements, which we had discarded.
• Paco Santos says:
I forgot to mention how I got the “byproduct”. My proof gives quite some information on what a sequence of length 12 with five elements should look like. Leaving aside the level with the empty set, which we assume to be F_12:
- F_1 and F_2 use only two elements and F_2 uses both of them. That is, wlog our sequence starts either [{1}, {12}, ...] or [{12}, {1, 2}, ...]. In fact, if the second happens we can swap F_1 and F_2 and then remove from F_1 one of the two singletons. So, wlog [F_1,F_2] = [{1},{12}]
- Same argument for F_10 and F_11. Wlog [F_10,F_11]=[{34},{4}]
– All of F_3,…,F_9 use 5 and none of them contains the singleton {5}. That is, restricting them to 5 we have a sequence of length seven with no empty set.
One way of constructing F_3…F_9 (and maybe the only one, although I don’t have a proof) would be a sequence of length seven on four elements that starts with {12} and ends with {34}. I did not find that, but I found (more precisely, we know since some weeks ago) one of length six: [{12}, {1, 2}, {13, 24}, {14, 23}, {3, 4}, {34}]. Joining it to 5 and then adding the head and tail of length two plus the empty set gives the sequence of length 11:
[{1}, {12}, {125}, {15, 25}, {135, 245}, {145, 235}, {35, 45}, {345}, {34}, {4}, {}]
• Paco Santos says:
Hum, when I said “maybe the only one” I was too quick. Jason Conti’s sequence was constructed differently. Relabeling it to better match my notation his sequence is
[{1}, {12}, {125}, {15, 25}, {135, 2}, {13, 35, 24}, {5, 23, 14},{3, 45}, {34}, {4}, {}]
The head and tail are indeed as in my proof [{1}, {12}, ..., {34}, {4}, {}] (in fact, the arguments in the proof imply every sequence of length more than ten can be easily modified to have precisely that head and tail).
But the restriction to 5 is different and finishes with the singleton 4: [{12}, {1,2}, {13}, {3}, {}, {4}]. Joining this to 5 and simply adding the head and tail does not give a convex sequence, because the subsequence has abandoned the element 3 and we use it again in the tail, but convexity is restored by extra sets in the central part not using 5.
10. Paco Santos says:
What would be really nice is to use the arguments in the proof of $f(5) \le 11$ to strengthen the recursion $f(n) \le 2f(n/2) + f(n-1)$.
Part of the argument is that this formula overcounts the empty set not once (which gives the -1 in the wiki) but twice. That is not important asymptotically. The other part vaguely says:
“For f(n) to be close to 2f(n/2) + f(n-1) we need our sequence to consist of:
- A head and a tail of lengths close to f(n/2), with disjoint supports.
- A central part of length close to f(n-1) and using an element all the way.
But if the three subsequences have length close to maximal then their ends will be too thin for us to be able to glue them together preserving convexity”.
The challenge is to make this vague argument more precise…
• Gil Kalai says:
Here is some idea in this direction. Let's consider f(d,n) (or $f^*(d,n)$).
Think about n as much larger than d. Our argument is based on reaching the same 'm' after making f(d,n/2) moves from both ends. The point is that the union of all sets in the first s+1 families, where s=f(d,u), is larger than u.
Suppose that we want to replace f(d,n/2) by SUM f(d-i, n/1000),
which we can replace by d f(d,n/1000). If we can do it this will decrease the constant in the exponent n^{\log d}. It sounds appealing: we reach n/1000 m's, then when we fix m and consider only sets containing it we reach n/1000 new m's, and we continue in all possible ways. It seems that in d f(d,n/1000) we will reach many more elements of our original ground set unless there is some structure to our family that we can exploit in another way.
11. Yury Volvovskiy says:
I tried to follow Gil’s advice and think about the properties that ensure $f(n)\le f(n-1) + 2f(n/2)$ inequality, or the $f(n)\le n^{\frac{1}{2}\log_2 n }$ upper bound. Looking at the two proofs (the original one [see Wiki], and mine, which is more complicated but gives the upper bound a bit more explicitly) one can see that we only look at the supports of the families and therefore do not explicitly use the fact that families are disjoint (the supports aren’t). So here’s the idea:
Let’s look at legal sequences of subsets of $[n]$ defined inductively as follows:
1. The only legal sequence on $0$ elements is $\{\emptyset\}$.
2. Any legal sequence on $n-1$ elements is also a legal sequence on $n$ elements.
3. A sequence $\{S_1,\,S_2,\dots,\,S_k\}$ on $n$ elements is legal if and only if
3a) every proper subsequence is legal (there are two possible versions of this rule: the less restrictive one only requires that intervals are legal, the more restrictive – that all subsequences are. The difference can be demonstrated by the sequence $\{\emptyset,\{1\},\emptyset\}$ which is legal in the former sense but not the latter)
and
3b) if an element $a$ belongs to every $S_i$ then there are subsets $S_i^{*}\subset S_i\setminus\{a\}$ such that $\{S_1^*,\,S_2^*,\dots,\,S_k^*\}$ is a legal sequence on $n-1$ elements.
The $n^{\frac{1}{2}\log_2 n }$ upper bound for the length of a legal sequence is proved by the exact same argument. The question is if it’s possible to construct a legal sequence of super-polynomial length.
• Yury Volvovskiy says:
I was sure I forgot something: obviously a legal sequence must be convex.
• Yury Volvovskiy says:
Here’s a quadratic example in the case where only intervals are required to be legal:
$\emptyset$
$\emptyset, 1, \emptyset$
$\emptyset, 1, 12, 12, 2,\emptyset$
$\emptyset, 1, 12, 123, 123, 123, 123, 23, 3,\emptyset$
$\emptyset, 1, 12, 123, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 234, 34, 4,\emptyset$
The length of such a sequence is $(n+1)(n+2)/2$.
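(As a quick check, for $n=0,1,2,3,4$ these sequences have lengths $1, 3, 6, 10, 15$, matching $(n+1)(n+2)/2$.)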
• Yury Volvovskiy says:
The same idea works for the case when all subsequences must be legal:
$\emptyset$
$\emptyset,1$
$\emptyset,1,12,2$
$\emptyset,1,12,123,123,23,3$
$\emptyset,1,12,123,1234,1234,1234,1234,234,34,4$
The length is $1 + n(n+1)/2$.
• Gil Kalai says:
Yuri, I am a little confused. Do you have an example (of sets) showing that f(n) is at least (n+1)(n+2)/2?
• Paco Santos says:
To answer Gil’s question: “Do you have an example (of sets) showing that f(n) is at least (n+1)(n+2)/2?”
The answer is “No”. What Yury has is a more general model, hence giving rise to a new function y(n) which is guaranteed to be at least as big as f(n). What Yury’s example shows is that $y(n) \ge (n+1)(n+2)/2$, which does not directly say anything about f(n).
But the hope is that it might be easy(-er) to analyze (and maybe find polynomial upper bounds for) y(n) than f(n), since Yury’s model is based in forgetting -at least partially- the interaction between the different elements used in each level.
I have to say I was very excited when I read his posts. At first it looked obvious to me that $y(n) \le y(n-1) +n$. This lasted until I tried to actually prove this inequality…
• Gil Kalai says:
Dear Yury and Paco, I am not sure I understand what precisely $y(n)$ is, and also why f(n) is smaller than or equal to y(n). Can you explain again?
• Yury Volvovskiy says:
Dear Gil,
Let me try to give a formal definition. Let $R_n$ be a collection of sequences of sets $\{S_1,S_2,\dots,S_k\}$, $\left|\cup S_i\right|\le n$, satisfying following conditions:
1. $R_0$ consists of the only sequence $\{\emptyset\}$.
2. Convexity: $S_i\cap S_j\,\subset S_k$ for $i<k<j$
3. $R_{n-1}\subset R_n$.
3'. If a sequence $\{S_1,S_2,\dots,S_k\}$ is in $R_n$, but $\left|\cup S_i\right|< n$ then it is also in $R_{n-1}$.
4. Any subsequence of a sequence in $R_n$ is also in $R_n$.
5. Induction: if there's an element $a$ common to all the sets in a sequence $\{S_1,S_2,\dots,S_k\}$ in $R_n$, then there must exist sets $S_i^*\subset S_i\setminus\{a\}$ such that $\{S_1^*,S_2^*,\dots,S_k^*\}$ is in $R_{n-1}$.
Then $y(n)$ is defined as the maximal length of a sequence in $R_n$. It is at least as large as $f(n)$ because given a convex sequence of families of subsets of $[n]$ the sequence of the supports of these families is in $R_n$.
By definition $y(0)=1$. The upper bound $y(n)\le n^{\frac{1}{2} \log_2 n }$ is obtained by the argument from the Wiki: one uses properties 3 and 4 to separate opening and closing subsequences on $n/2$ elements, then uses convexity (property 2) to show that the subsequence in the middle has a common element and then uses induction (property 5) to show that its length is bounded by $y(n-1)$ recovering the estimate $y(n)\le y(n-1)+2y(n/2)$.
• Klas markström says:
A quick question before shutting down for the evening. How does this compare to my broken example from the first thread?
http://gilkalai.wordpress.com/2010/09/29/polymath-3-polynomial-hirsch-conjecture/#comment-3422
• Paco Santos says:
1) I have been thinking about Yury’s model and I think the following is true: “there is no loss of generality in assuming that the intervals where the different elements are active are never properly nested to one another”.
The proof is as follows (please Yury, check if I understood things correctly): suppose we have two elements i and j such that i appears strictly earlier than j and disappears strictly later than j (this is what I mean by “properly nested”). Let t_j and t_i denote the last levels where i and j are used. Then it seems to me that changing i to j in all the levels from t_{j+1} to t_i still produces a valid sequence, since the restriction of it to either i or j is a subsequence (actually, an interval) of the original one restricted to i, and for the restrictions to the rest of elements we can use induction.
2) If (2) is true it means the following: there is a natural order of the elements such that they all appear and disappear according (weakly) to this order. I say weakly because two elements may appear and/or disappear at the same time.
3) Now, (2) implies a further simplification of the model. We do not need to remember the actual elements used in each level, but only the number of them. The sequences are no longer sequences of subsets of [n] but sequences of numbers. The actual subsets can be derived from the numbers.
For example, the sequences in Yury’s last example become:
[0]
[0,1]
[0,1,2,1]
[0,1,2,3,3,2,1]
[0,1,2,3,4,4,4,4,3,2,1]
What is not clear is what Yury’s axioms become in this simplified model. (In particular, it is not completely clear that my model is truly a simplification; the sequences are simpler but the definition of validity is more intricate).
• Paco Santos says:
Typo: where it says “2) If (2) is true…” it should say “2) If (1) is true…”
• Paco Santos says:
My part (3) is not completely true. In the same step some elements may appear and some others disappear from the support, and that is not detected by remembering only the cardinality of the support. To give a concrete example, the sequence [0,1,2,3,3,2,1] can represent Yury’s sequence on 3 elements [0, 1, 12, 123, 123, 23, 3] but it may also represent the following sequence on four elements: [0, 1, 12, 123, 234, 34, 4].
But I still believe (1) and (2) are true in Yury’s model. Put differently, I think in Yury’s model there is no loss of generality (or rather, there is no loss of length) in assuming that all the S_i’s are intervals.
• Yury Volvovskiy says:
This is easy to deal with. If an element $i$ disappears on the same step when an element $j$ appears in the set, we can simply add $i$ to this set. When doing restriction by $i$ we trim the last set down to $\{j\}$ and vice versa.
Another observation is that if $j$ only appears together with $i$ one can add $j$ to all the sets that contain $i$. That’s a slightly easier way to show that the intervals are not nested – if they are, they can actually be assumed to coincide. So if two elements appear in the sequence at the same time, they also disappear at the same time.
• Yury Volvovskiy says:
On second thought, Paco’s way of getting rid of properly nested elements is better than mine because it survives restriction. The property of not having one element appear and another one disappear in the same step also survives restriction.
That gives some hope for Paco’s proposed simplification. Validity remains convoluted though: one has to say that the sequence has height $n$ if the sum of all up-jumps is $n$ (that corresponds to the number of elements in the original formulation). Then we need to require that any subsequence between an up-jump and the corresponding down-jump (appearance and disappearance of an element) strictly dominates a legal sequence of height at most $n-1$.
Example: height 3 sequence [0,1,2,3,3,2,1] is good because subsequences [1,2,3,3] and [2,3,3,2] strictly dominate a good 2-sequence [0,1,2,1] and subsequence [3,3,2,1] strictly dominates another good 2-sequence [1,2,1,0].
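A minimal sketch of the “strictly dominates” check used in this example, assuming (as the example suggests) that it means entrywise strict inequality between number sequences of equal length:
```
def strictly_dominates(a, b):
    """True if the sequences have equal length and a[i] > b[i] for every i."""
    return len(a) == len(b) and all(x > y for x, y in zip(a, b))

# the height-3 sequence [0,1,2,3,3,2,1] from the example above:
print(strictly_dominates([1, 2, 3, 3], [0, 1, 2, 1]))  # True
print(strictly_dominates([2, 3, 3, 2], [0, 1, 2, 1]))  # True
print(strictly_dominates([3, 3, 2, 1], [1, 2, 1, 0]))  # True
```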
• Gil Kalai says:
The (emerging) understanding that there is (without losing generality) some natural ordering on the elements, and perhaps also (that we may assume without losing generality) that this (weak) ordering is respected by restrictions, looks to me like it gives hope for better upper bounds. (But I cannot explain why I think so.)
(By restrictions I mean: moving to a legal sequence after we delete a common element in a certain interval. I am not sure this is what Yury means and I am not sure if we indeed can assume that the ordering respects restrictions.)
• Paco Santos says:
Gil said: “I am not sure this is what Yury means and I am not sure if we indeed can assume that the ordering respects restrictions.” I think the answer to both parts is “yes”. Let me prove the second (only Yury can prove or disprove the first).
Let us call a valid sequence of sets (valid in the sense of Yury) *monotone* if it satisfies my additional axiom that the elements appear in order and disappear in order. Put differently, if every $S_i$ is an interval of the form $[l_i, r_i]$ and both the sequences of $l_i$’s and $r_i$’s are (weakly) monotone. Of course, implicit in the definition of monotone is a prescribed ordering of the elements.
“Respected by restrictions” means:
Lemma: If a sequence is valid and monotone, then the valid restrictions of it with respect to every element can be chosen monotone.
Proof. If the original sequence $S$ is monotone, the operation of “taking only the sets containing a certain k and removing k from them” gives a (perhaps not valid) monotone sequence, which I will denote $S/k$. The original sequence being valid means this monotone sequence $S/k$ contains a valid sequence $T$ (“contains” in the sense of Yury’s axiom 5).
Suppose $T$ is not monotone. Then, there are indices $i<j$ such that either $j$ appears or disappears before $i$. Do the following:
- if both things happen, exchange $i$ and $j$ all throughout $T$.
- if $j$ appears before $i$, add $i$ to all the sets from the appearance of $j$ to the appearance of $i$ (or change all $j$'s to $i$'s in those sets, both operations work).
- if $j$ disappears before $i$, add $j$ to all the sets from the disappearance of $j$ to the disappearance of $i$ (or change all $i$'s to $j$'s in those sets, both operations work).
All these operations preserve validity and give sequences contained in the "monotone envelope" of $T$, which means they are still contained in $S/k$. Doing them on and on will eventually lead to a monotone sequence contained in $S/k$. QED
Incidentally, in this post I stated that “monotone” was equivalent to “every $S_i$ is an interval”. That is not enough, as the sequence [2,12,23] shows. We need the extra condition that the sequences of extrema of the intervals are monotone.
• Gil Kalai says:
This is very interesting, Paco. As I said, having a natural ordering on the elements which is preserved under restrictions looks useful to me for improving the upper bounds. But I cannot really justify this feeling.
12. Jason Conti says:
I found a way to transform another example of f(5) that is easy to generalize to any n (of length 2n) into the example for f(5) of length 11. I think we may be able to use a variation to generate longer sequences with larger n.
F_1 = {}
F_i for 2 <= i <= 2n
* contains {j, k} for j + k = i, 1 <= j <= k <= n (with j = k giving the singleton {j})
For n = 5:
[{}, {1}, {12}, {2, 13}, {23, 14}, {3, 15, 24}, {34, 25}, {35, 4}, {45}, {5}]
Merge(8)
[{}, {1}, {12}, {2, 13}, {23, 14}, {3, 15, 24}, {34, 25}, {345}, {45}, {5}]
Add({35}, 7)
Remove({25}, 7)
[{}, {1}, {12}, {2, 13}, {23, 14}, {3, 15, 24}, {34, 35}, {345}, {45}, {5}]
Add({25}, 5)
Swap(5, 6)
[{}, {1}, {12}, {2, 13}, {3, 15, 24}, {23, 14, 25}, {34, 35}, {345}, {45}, {5}]
Insert({235, 4}, 7)
[{}, {1}, {12}, {13, 2}, {15, 24, 3}, {14, 23, 25}, {235, 4}, {34, 35}, {345}, {45}, {5}]
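For readers who want to experiment, here is a small sketch that generates the initial length-2n sequence defined at the top of this comment (before the Merge/Add/Swap steps). The reading of the rule is an assumption on my part: F_1 is empty and, for 2 <= i <= 2n, F_i consists of the sets {j, k} with j + k = i and 1 <= j <= k <= n, where j = k gives a singleton.
```
def initial_sequence(n):
    families = [set()]                      # F_1 = {}
    for i in range(2, 2 * n + 1):           # F_2, ..., F_2n
        F_i = {frozenset({j, i - j}) for j in range(1, n + 1) if j <= i - j <= n}
        families.append(F_i)
    return families

for F in initial_sequence(5):
    print(sorted("".join(str(x) for x in sorted(s)) for s in F))
# matches the n = 5 sequence shown above, up to the order within each family
```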
The reason I think this may help, is that as n gets larger, more opportunities open up to add additional families. For n = 7, the procedure above can be performed on both ends of the sequence, yielding a sequence of length 16:
[{}, {1}, {12}, {123}, {13, 23}, {134, 2}, {14, 25, 34}, {15, 24, 3}, {17, 26, 35, 4}, {37, 46, 5}, {36, 45, 47}, {457, 6}, {56, 57}, {567}, {67}, {7}]
(I still think an example of f(6) of length 14 exists, but the above procedure didn’t quite work) My hope is that this can be improved on as n gets larger, by including longer sets in the families.
Another idea, that is somewhat related, is to take a sequence with length > 2n, duplicate, mirror, replace the symbols with a disjoint set and remove the empty set family, and then joining them together with a sequence of families containing symbols from both. This is related if one considers a general form of sequences of length 2n + 1:
[{}, {1}, {12}, {13, 2}, {15, 24, 3}, {14, 23, 25}, {235, 4}, {34, 35}, {345}, {45}, {5}, {56}, ... , {(n-1)n}, {n}]
Duplicate, mirror and join it with the family {n(n+1)}, and the interior families can be converted to a set of families that is similar to the initial sequence at the start of this post (using the leftover subsets), that may be possible to extend with the above operations (working on an example for f(14), but got sidetracked).
Using the example for f(5) of length 11, and just the basic family {n(n+1)}, the above will yield a sequence length of 11n/5, for n=5,10,20,40, …, which really isn’t great. Perhaps modifying the end of the first sequence, the start of the second sequence, and a clever choice of joining families that increase with n each iteration could improve it.
13. Gil Kalai says:
This comment is in reply to Yury’s explanation of the new, even more abstract version which abstracts the properties of the supports of our families ${\cal F}_1,\dots,{\cal F}_t$.
Dear Yuri, I see, this is very interesting!! I will also think about the version of Paco. This looks like it should be an easier question. (But we often had this feeling before.)
14. Yury Volvovskiy says:
I discovered that $y(n)$ can be larger than $(n+1)(n+2)/2$. For $n=5$ there’s a sequence of length $22$. Here it is
$\emptyset,1,12,12,123,1234,1234,12345,12345,12345,12345,12345,12345,$
$2345,2345,2345,2345,345,345,45,5,\emptyset$
The reductions are:
1: $\emptyset,2,\emptyset,3,34,34,345,345,45,45,5,\emptyset$
2: $\emptyset,1,13,134,134,134,1345,1345,1345,345,345,345,45,5,\emptyset$
3: $\emptyset,1,12,124,124,124,1245,1245,1245,245,245,245,45,5,\emptyset$
4: $\emptyset,1,12,12,123,123,1235,1235,235,235,235,235,35,5,\emptyset$
5: $\emptyset,1,12,12,123,123,23,23,234,234,34,34,4,\emptyset$.
That’s sort of interesting since I was for some reason pretty sure that the quadratic upper bound should work. It still might but I’m not so sure anymore.
Another question (I didn’t have time to think about it) is whether it’s possible to give an upper bound for $y(n)$ in terms of $f(p(n))$, where $p(n)$ is a polynomial.
• Paco Santos says:
How can you have two empty sets in your sequence? This is forbidden by axiom 4, (together with 3′ and 1) unless you are using the version of axiom 4 that says “subintervals” rather than “subsequences”, as mentioned in your initial post…
• Yury Volvovskiy says:
Yes, that’s the version I’m using – somehow it seems more convenient, although I don’t think there’s an essential difference between the two. The $(n+1)(n+2)/2$ upper bound appeared in the context of that version too.
15. Paco Santos says:
I am not sure this helps, but in Yury’s model there is an analogue of d-uniform sequences: Define collections $R_{d,n}$ rather than $R_n$, and modify the induction axiom 5 to read something like “The restriction of a sequence in $R_{d, n}$ to any element $k$ is a sequence in $R_{d-1,n-1}$“. Put also the additional axiom “6. Every $S_i$ has at least d elements”. (This extra axiom may not be strictly necessary to get something sensible, but it seems natural).
If we denote y(d,n) the maximal length of sequences in $R_{d,n}$ so obtained it is pretty easy to show that:
- $y(1,n)=n$ (if d=1, no element can appear in two different S_i’s; on the other hand, the sequence [1, 2, 3, ..., n] is valid).
- $2n-3 \ge y(2,n) \ge 2n-4$. The upper bound is as in the f(d,n) case and the lower bound follows from the sequence: [12, 123, ... , 123... n-2, 123 ... n-1 n, 123 ... n-1 n, 3...n-1 n, ... , n-2 n-1 n, n-1 n]. I think that $y(2,n) = 2n-4$.
• Gil Kalai says:
This looks helpful. I suppose we would like to think about strategies for proving better upper bounds for y(n) and y(d,n), perhaps even proving that y(d,n) is at most d(n-1)+1 (say), or just a similar bound for y(n) (or for similar functions extended to monomials/multisets, which we did not look at).
16. Gil Kalai says:
It certainly looks to me that the best way to bound y(n) and y(d,n) would be some clever direct combinatorial argument. (But I don’t know what it will be.)
Still going back to the linear-algebraic suggestion and Nicolai’s basic example, we may think of something like this: Yury and Paco defined (recursively) what a d-legal sequence of subsets of an n-element set is; we have an example of a d-legal sequence of subsets of {1,2,…,n} and we want to bound its length t.
(A reminder: the translation from families to such d-legal sequences is done by looking at the supports of the ith families, i=1,2,…,t. Yury formulated a set of axioms for these supports which still leads to the upper bound that we knew, and this setting looks much more abstract and general; Yury also showed some examples.)
We also may assume (I think this is what Paco demonstrated) and use that the ordinary ordering 1,2,…, n is compatible with the ordering of this legal sequence and all restrictions.
So we start with n variables x_1,…,x_n and consider n linear combinations of the x_i’s
called y_1,…,y_n
We consider the d(n-1)+1 special monomials in the x_i’s that came from Nicolai’s example: just all monomials of degree d involving 2 consecutive variables. Let’s call them N-monomials.
The claim we would like to have is this: we can choose degree-d monomials, the ith supported on the variables corresponding to S_i, such that, expressed in terms of the N-monomials (and neglecting all other terms), we get linearly independent polynomials.
Somehow, I feel that taking the matrix transforming the x_i’s to the y_j’s to be triangular (w.r.t. the natural ordering of the x_i’s that we talked about) will give an inductive argument for linear independence a better chance.
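A small sketch, just to confirm the count used above: enumerating all degree-d monomials in two consecutive variables and checking that there are d(n-1)+1 of them (the pure powers x_i^d are shared by neighbouring pairs, so they are counted only once).
```
def n_monomials(n, d):
    """Degree-d monomials supported on a consecutive pair x_i, x_{i+1}."""
    monos = set()
    for i in range(1, n):                  # the pair (x_i, x_{i+1})
        for a in range(d + 1):             # x_i^a * x_{i+1}^(d-a)
            mono = tuple((v, e) for v, e in ((i, a), (i + 1, d - a)) if e > 0)
            monos.add(mono)
    return monos

for n, d in [(3, 2), (5, 3), (7, 4)]:
    print(n, d, len(n_monomials(n, d)), d * (n - 1) + 1)  # last two columns agree
```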
17. Gil Kalai says:
We had a few days of silence. Personally, I am very encouraged by the new level of abstraction that Yury considered and the subsequent observations by Yury and Paco and I hope some people are thinking about it. I plan not to wait for 100 comments on this post but rather to write this weekend a new post briefly describing these developments.
• Paco Santos says:
Like Gil, I feel quite optimistic that Yury’s simplified model might lead to something new. One idea that comes to my mind is to try to understand the “maximal” sequences [S_1,...,S_t] valid in Yury’s context. By maximal I do not mean that $t$ is maximal (within the sequences with a given number $n$ of symbols) but rather that no element can be added to any $S_i$. Observe that inserting an element $k$ into an $S_i$ “helps” the restriction with respect to every element other than $k$ to be valid, so the only problem for insertion is the restriction to $k$ itself.
For example, some of the things Yury and I have said so far imply:
1) In a maximal valid sequence, if the symbols {1, 2,…,n} are permuted so that they appear in weak monotone order (that is, the first appearance of each $k$ happens before or at the same time as that of $k+1$) then the symbols also disappear in order. That is, every maximal sequence is “monotone” in the sense of this post (modulo permutation of the symbols). Even more so, if two elements appear at the same time in a maximal sequence then they also disappear at the same time, and vice versa.
2) In a maximal valid sequence each $S_i$ either contains or is contained in the next $S_{i+1}$. This follows from this argument of Yury.
As a corollary from (1) and (2), a maximal valid sequence can be completely recovered from the sequence of cardinalities of the $S_i$’s.
One problem with this approach is that it is not clear whether maximality is preserved by restriction. That is: can the restrictions of a maximal sequence with respect to all the elements be taken maximal? Probably not…
18. Yury Volvovskiy says:
I find it convenient to work in the world according to Paco’s simplification where you replace a set by a single number, the cardinality of the set. That allows a geometric interpretation where instead of a sequence you consider the graph of the cardinality function $N(k)$, which is the cardinality of the $k$th set of the sequence. The restriction (induction) condition means that you can nest smaller graphs inside the big one.
It looks like it could be interesting to study symmetric graphs. For instance, my first example for $n=4$
$\emptyset,1,12,123,1234,1234,1234,1234,1234,1234,1234,234,34,4,\emptyset$
in Paco’s notation would look like
$0,1,2,3,4,4,4,4,4,4,4,3,2,1,0$.
All 4 restrictions could be made the same, given by the sequence $0,1,2,3,3,3,3,2,1,0$.
Here’s a bit more convoluted example: consider a legal sequence
$s= \left\{0,1,2,2,3,4,5,5,5,5,5,5,5,4,4,4,4,3,2,1,0\right\}$
of height 5 and length 21 and another sequence
$S = \left\{0,1,2,3,3,4,5,6,6,\dots,6,6,5,4,3,3,2,1,0\right\}$
of height 6 and length 29. The interesting thing is that all the restrictions of $S$ can be chosen to be either $s$ or its mirror image.
If I’m right (I haven’t fully convinced myself yet) a similar construction exists for larger heights and it improves the lower bound from $n^2/2$ to $5n^2/8$.
• Yury Volvovskiy says:
To give a bit more color: $S$ is the smallest sequence that covers $s$. It starts with $0,1$, then it grows every time $s$ grows and stays level when $s$ stays level or drops. The first time $S$ drops is when $s$ ends (that corresponds to the first element leaving the support). Then we make the descent from 6 back to 0 symmetric to the ascent from 0 to 6 at the beginning of the sequence.
19. Paco Santos says:
On the path of simplifying the model again and again, but keeping the proof of $2^{O(\log^2 n)}$ valid, I would propose Yury’s axiom 5 (recursion) to read simply:
5′. Induction: if there’s an element common to all the sets in a sequence, then the length of the sequence does not exceed the maximum length of a sequence on $n-1$ elements.
Put differently, the interval on which a certain $k$ is active cannot be longer than $y(n-1)$.
20. Gil Kalai says:
This is even more general. Maybe this will allow an example of non-polynomial length?
• Paco Santos says:
That might be the case, but it would be interesting. In a sense, what we are doing with these iterated generalizations/simplifications is exploring the “limit of abstraction”. What I like about the new model is that it seems more suited to computer experimentation; to compute s(n+1) (if I may denote in this way the function obtained with this model) you do not need to remember all the valid sequences you got for s(n), just the actual value of s(n).
• Yury Volvovskiy says:
It seems to allow a rather trivial superpolynomial construction. The sequence of length $s(n-1)$ whose every element is $[n]$ is legal. So is the sequence of length $s(2n)$ whose first $s(n-1)$ elements are $[n]$ and the rest are $[2n+1]$. Finally by adding to this sequence $s(n-1)$ elements of the form $[2n]\setminus [n]$ we get a legal sequence on $2n+1$ elements of length $s(2n)+s(n-1)$. Am I missing something?
• Paco Santos says:
I think you are right.
At least this clarifies a point. In my attempts to prove that your y(n) is polynomial I think I have always been using your axiom 5 in my simplified form. Now I know that this could not work…
21. Nicolai Hähnle says:
I’ve been trying to wrap my head around Yury’s proposed y(n)-abstraction. I think I understand it, but let me just write out some things that have been bothering me. In particular, given a sequence of subsets of [n], how does one check whether that sequence is valid? Let me try to write it out in some form of pseudo-code.
Input: sequence S_1, …, S_t, $U = \bigcup S_j$
Output: Yes or No
1. If U is empty (corresponding to n=0), output Yes if $t \leq 1$, else output No.
2. If the sequence is not convex, output No.
3. For all proper sub-sequences, perform the check recursively. If any of the recursive checks return No, return No.
4. If there is an element x common to all sets S_j, then create $T_j = S_j \setminus \{ x\}$. For all sequences that can be obtained by taking (not necessarily proper) subsets of $T_1,\ldots,T_t$, perform the check recursively. If all of these recursive checks return No, return No.
5. If we reach this point, all tests have passed and we return Yes.
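A direct transliteration of these five steps into Python, only to make the recursion explicit; it enumerates sub-sequences and subsets naively, so it is feasible only for very small examples. Two reading choices are assumptions on my part: “convex” is taken to mean that every element is active on an interval of positions, and a sequence is given as a list of Python sets.
```
from itertools import combinations, product

def is_convex(seq):
    # assumption: each element's set of active positions must be an interval
    for x in set().union(*seq):
        pos = [i for i, S in enumerate(seq) if x in S]
        if pos != list(range(pos[0], pos[-1] + 1)):
            return False
    return True

def subsets(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def is_valid(seq):
    seq = [frozenset(S) for S in seq]
    U = set().union(*seq) if seq else set()
    if not U:                                       # step 1
        return len(seq) <= 1
    if not is_convex(seq):                          # step 2
        return False
    for r in range(len(seq)):                       # step 3: proper sub-sequences
        for idx in combinations(range(len(seq)), r):
            if not is_valid([seq[i] for i in idx]):
                return False
    for x in set.intersection(*map(set, seq)):      # step 4: common element
        T = [S - {x} for S in seq]
        if not any(is_valid(list(choice)) for choice in product(*map(subsets, T))):
            return False
    return True                                     # step 5

print(is_valid([{1}, {1, 2}, {2}]))  # True
```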
It seems that instead of checking all possible combinations of subsets in step 4, we can “annotate” sequences of sets by the reduction induced by each element. Then we can simply check in step 4 whether the “annotated reduced sequence” is valid. This simplifies the checks a lot.
Is it correct that those reduced sequences can be chosen to be “commutative”? What I mean by this is that it should not matter whether we do “induction” first on x, then on y, or first on y, then on x.
I was thinking whether the linear algebra approach could work for this new abstraction. It seems much more reasonable to hope that we can associate some object to each element of the sequence, and the quadratic bound makes me think of associating matrices to the sets in the sequence.
I’ve tried some approaches of associating matrices with sets in the reduced sequences, and then combining those matrices somehow to get to matrices associated with the sets in the entire sequence. I would like to then find some statement of the form: if I have a non-trivial combination of the matrices yielding 0, then the same should be true for one of the reduced sequences. I was not successful so far, perhaps because I didn’t find a good way to use convexity.
• Paco Santos says:
A couple quick comments:
- In point 3, I think you do not need all the sub-sequences. It seems to me it is enough to check the maximal sub-sequences containing each x, plus checking that you don’t have too many occurrences of the empty set. (Too many means: more than one empty set if you take the full “sub-sequence” axiom, or two consecutive empty sets if you only take the “sub-interval” axiom.)
- Concerning “Is it correct that those reduced sequences can be chosen to be “commutative”? What I mean by this is that it should not matter whether we do “induction” first on x, then on y, or first on y, then on x.” I think the answer is: If you assume commutativity you recover the original model that gives f(n). My argument: if you assume commutativity, for each subset $S\subseteq[n]$ you can define the “active interval” of $S$ to be the interval where the sequence annotated to $S$ happens. Define then $R_i$ to be the family of maximal subsets $S$ that are active at time $i$. The $R_i$’s so obtained form a convex sequence of families of sets, except a priori some set $S$ may appear repeated in two different $R_i$’s. Now, if that happens, then the sequence annotated to that set $S$ contains the empty set two (or more) times, which is impossible (with the strong form of the sub-sequence axiom).
• Nicolai Hähnle says:
Dear Paco,
yes, I agree that far too many checks are applied in what I originally wrote.
The observation about the “commutative” property is very interesting! Your argument looks good to me. It seems like commutativity conflicts with your monotonicity property, though. I think one can make a valid commutative sequence monotone while keeping the commutativity, but I believe there will be conflicts if one tries to get monotonicity in the recursions. Or can these be fixed?
I was considering the following line of attack – which I am much less optimistic about now because of this conflict, but I’m going to write it down anyway.
Build a directed acyclic graph on vertices s, n, n-1, …, 2, 1, t with all possible arcs going from left to right. Assign to every set an s-t-path in this DAG in the following way. The first arc is from s to the largest element of the set. The next arc is from there to the largest element in the “recursed” set, etc., until the empty set is reached, which is indicated by an arc to t.
The nice property here is that the dimension of the space of s-t-flows is exactly the number of sets in Yury’s examples, and I was looking for ways to prove linear independence. It is clear that the same path cannot occur twice, because then we would have a way to recurse down to a sequence containing two empty sets (I am assuming the stronger formulation of all subsequences, not just subintervals). To argue that other sequences of paths are impossible I wanted to use commutativity.
The question is whether commutativity or monotonicity is the more helpful property for proofs. Strictly speaking, commutativity must be stronger if it allows to recover f(n) (which we already know to be strictly smaller than y(n), if I remember the case n=3 correctly).
• Paco Santos says:
You remember correctly about the $n=3$ case. We know f(3)=6, achieved by the sequence [0,1,12,123,23,3], and we know y(3)=7, achieved by the sequence [0,1,12,123,123,23,3].
This is actually a nice example of why commutativity cannot be assumed in the y model. Starting with this length seven sequence, the only valid restriction at 1 is [0,2,23,3] and the only valid restriction at 3 is [1,12,2,0]. Restricting the first at 3 gives [2,0] while restricting the second at 1 gives [0,2].
• Nicolai Hähnle says:
After a good sleep, I realized that my DAG idea doesn’t work in the y(n) model, even with monotone sequences. Consider the sequence [12,12,123,3], with restriction [0,2,23] at 1, [0,1,13,3] at 2, and [12,2] at 3, I would assign paths s-2-t, s-2-1-t, s-3-2-1-t, s-3-2-t, and those paths are linearly dependent.
22. Paco Santos says:
Hello everyone,
I am afraid I can show that $y(4n) \ge n y(n)$, which implies a super-polynomial lower bound. The exact inequalities I prove, which eventually give the one above, are:
y(2n+2) \ge 2 y(n),
y(2n+4) \ge 3 y(n),
y(2n+6) \ge 4 y(n),
y(2n+8) \ge 5 y(n),
y(2n+10) \ge 6 y(n), …
… and so on.
For the first one, we simply observe that the sequence with $y(n)$ copies of [n+1] is valid on $n+1$ elements, and use two blocks of it to show $y(2n+2) \ge 2y(n)$. Since this “blocks” idea is crucial to the whole proof, let me formalize it a bit. I consider my set of $2n+2$ symbols as consisting of two parts $A$ and $B$ of size $n+1$, and my sequence is $[A, A, ..., A, B, B, ...., B]$, with a first block of $A$‘s of length $y(n)$ and a second block of $B$‘s of the same length.
Now, I increase my set of symbols by two, putting one in $A$ and one in $B$. Then I can construct a valid sequence with *three* blocks of length $y(n)$ each: a first block of $A$‘s, a second block of $A\cup B$‘s and a third block of $B$‘s.
But if I put one more symbol to $A$ and to $B$, so that I now have $2n+6$ in total, I can build a valid sequence with *four* blocks of length $y(n)$: a first block of $A$‘s, a second and third blocks of $A\cup B$‘s and a fourth block of $B$‘s.
And so on…
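A tiny sketch iterating the special case $y(4n) \ge n\, y(n)$ of these inequalities, seeded with the length-22 example for $y(5)$ from comment 14, just to make the superpolynomial growth visible:
```
n, bound = 5, 22                 # y(5) >= 22, from the explicit sequence above
for _ in range(6):
    n, bound = 4 * n, n * bound  # y(4n) >= n * y(n)
    print(f"y({n}) >= {bound}    (compare n^2 = {n * n})")
```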
• Gil Kalai says:
Very nice!!!
• Yury Volvovskiy says:
nice construction!
23. Nicolai Hähnle says:
This is quite remarkable, Paco! If I am not mistaken, this gives at least $y(4^k) \geq 4^{k(k-1)/2}$ or $y(n) \geq 2^{\log(n)^2/8-\log(n)/4}$ if one starts at $n=1$. That is quite close to the upper bound.
• Paco Santos says:
Yes; this settles the complexity of y(n) to be $2^{\Theta(\log(n)^2)}$.
So I guess this means we go back to $f$ and $f^*$, and the question is what did we learn from $y$.
One thing we learnt is that we can model $f(n)$ by Yury’s axioms together with commutativity of the restrictions. Another thing is that keeping track only of the intervals when individual elements are active will not be enough to prove polynomiality of $f(n)$.
At least one thing is true. Something in the vein of my construction will not work for $f$, since the “blocks” will have a lot of fine structure inside and you cannot glue them to one another so freely.
24. Gil Kalai says:
We can maybe formulate intermediate problems regarding the ith shadows of our families, namely the i-element sets contained in sets of the families. Abstracting the 1-shadow is what Yury did, and there we now know we cannot improve the upper bounds. We may try larger values of i between 1 and d.
25. Pingback: Emmanuel Abbe: Erdal Arıkan’s Polar Codes | Combinatorics and more
26. Pingback: Polynomial Hirsch Conjecture 5: Abstractions and Counterexamples. | Combinatorics and more
27. Gil Kalai says:
http://mathhelpforum.com/algebra/107414-x-y-intercepts.html
# Thread:
1. ## x and y intercepts.
I need to find the x and y intercepts of this:
f(x) = x^3 - 4x
The y-intercept is just zero right? So how about the x's? I haven't done this in a while. Do you just factor this or do you do something else? I know it's easy.. but I'm having a major brain fade here. I'm guessing the x is on -4 and 4?
Edit: I think I got it. x-intercepts are on (-2,0) and (2,0)?
2. Originally Posted by nautica17
I need to find the x and y intercepts of this:
f(x) = x^3 - 4x
The y-intercept is just zero right? So how about the x's? I haven't done this in a while. Do you just factor this or do you do something else? I know it's easy.. but I'm having a major brain fade here. I'm guessing the x is on -4 and 4?
Edit: I think I got it. x-intercepts are on (-2,0) and (2,0)?
The y-intercept is given by f(0), so the y-intercept is zero. The x-intercepts occur when $y=0=x^3-4x$, so just solve this equation.
3. Originally Posted by nautica17
I need to find the x and y intercepts of this:
f(x) = x^3 - 4x
The y-intercept is just zero right? So how about the x's? I haven't done this in a while. Do you just factor this or do you do something else? I know it's easy.. but I'm having a major brain fade here. I'm guessing the x is on -4 and 4?
Edit: I think I got it. x-intercepts are on (-2,0) and (2,0)?
The x-intercepts are (x-2)(x+2) and 0.
4. Correction in red:
Originally Posted by Barthayn
The x-intercepts are the solutions to (x-2)(x+2)=0 and x = 0.
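A one-line check of the corrected answer, assuming sympy is available:
```
from sympy import solve, symbols

x = symbols('x')
print(solve(x**3 - 4*x, x))  # [-2, 0, 2] (in some order): intercepts (-2,0), (0,0), (2,0)
```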
http://mathoverflow.net/questions/110062?sort=oldest
## Start with a topological group, take the meet of the two uniformities, and take the topology. Is the result again a topological group? [xpost from math.SE]
And what else can be said, if so?
(Original math.SE post)
In more detail: Say $(G,\mathscr{T})$ is a topological group. It has a left uniformity $\mathscr{L}$ and a right uniformity $\mathscr{R}$. (It also has a two-sided uniformity $\mathscr{U}$, which is the join of the two.)
Now, uniformities on a given set form a complete lattice, so we can also consider the meet of the two, $\mathscr{V}$. However, the meet of two uniformities that yield the same topology does not necessarily again yield the same topology, so it's possible that $\mathscr{T}'$, the topology coming from $\mathscr{V}$, is coarser than our original topology $\mathscr{T}$.
(Obviously, this does not happen if the group is balanced, i.e. $\mathscr{L}=\mathscr{R}$; it also does not happen if $\mathscr{T}$ is locally compact, since the meet of two uniformities yielding the same locally compact topology does again yield the same topology. Actually, I don't know an actual case where this does happen, so I guess a first question I can ask is, are there any actual examples of this?)
So my question is, is $(G,\mathscr{T}')$ again a topological group? Obviously inversion is continuous, since $\mathscr{V}$ makes inversion uniformly continuous, but it's not clear what would happen with multiplication.
If it is a topological group, then we can ask things like, how does $\mathscr{V}$ compare to $\mathscr{L}'$, $\mathscr{R}'$, $\mathscr{U}'$, and $\mathscr{V'}$? (Well, obviously it's coarser than the last of these.) And considering $\mathscr{T} \mapsto \mathscr{T}'$ as an operation on group topologies on $G$, what happens when we iterate it? When we iterate it transfinitely?
-
What is a good example of a nonabelian, non-locally-compact, non-balanced group to try this out on? Linear transformations on some Banach space with some appropriate topology? – Gerald Edgar Oct 19 at 14:28
## 2 Answers
This is most definitely not my field of expertise (so be kind!), but Section 1.8 of the book "Topological Groups and Related Structures, an introduction to topological algebra" by Arhangel'skii and Tkachenko deals with these sorts of questions.
The book is available online at SpringerLink if you have access. Theorem 1.8.15 deals with something called the Roelcke uniformity, and if I'm reading the result correctly, proves that it is compatible with the topology on the group, and also the finest uniformity on the group coarser than the left and right uniformities.
I hope this is helpful! I can edit with better information after my colleague Vladimir Uspenskij gets out of class, as he's an expert on this.
Edit: I asked Uspenskij about this, and his quote was something like "In general, the meet of two uniformities is something horrible, but in topological groups we get the nice Roelcke uniformity."
-
Ooh, a whole book. Thanks a lot. – Harry Altman Oct 20 at 21:34
The meet of the left and right uniformities is called the Roelcke uniformity, as Todd Eisworth mentions. The topology it generates is the original topology (the same is true for the join of the two uniformities). One way to see it is as follows: a fundamental system of entourages for the Roelcke uniformity is given by sets of the form $\{(x,y) \colon y \in VxV \}$, for $V$ a neighborhood of the neutral element. If $U$ is an open neighborhood of some $g \in G$, then by joint continuity of the group operations there exists an open $V$ containing the neutral element and such that $VgV \subseteq U$, which shows that $U$ is open for the topology induced by the Roelcke uniformity. So the topology induced by the Roelcke uniformity is finer than the original one, and it is clearly coarser.
-
If someone knows how to format the sets properly (so that curly brackets appear), please edit my answer! – Julien Melleray Oct 19 at 20:40
To have curly brackets on MO, you sometimes have to put two backslash symbols before the `$\{$`. – Mikael de la Salle Oct 19 at 20:45
Thanks! I'll try to keep that in mind... – Julien Melleray Oct 19 at 21:00
Ordinarily, "\lbrace" and "\rbrace" should work. – Lubin Oct 20 at 13:54
Huh, that's much simpler than I anticipated. Thank you. – Harry Altman Oct 20 at 21:34
http://www.physicsforums.com/showthread.php?t=263335
## Earth/Moon Gravity
1. Locate the position of a spaceship on the Earth-Moon center line such that, at that point, the tug of each celestial body exerted on it would cancel and the craft would literally be weightless. Please answer in meters from the Moon
2. The only thing I can think of is that G=6.67E^-11
3. I am not sure how to approach this problem
Okay, you know this formula I'm sure: $$F = \frac{GMm}{r^2}$$ But unfortunately the force depends on the masses of both bodies, so let's divide by m to find the acceleration due to M. $$a = \frac{GM}{r^2}$$ For the distance (r) at which both have the same gravitational acceleration, you must set the two equations equal to each other. Be careful with your r, because you are measuring from the moon.
O.k., the mass of the earth is 5.97E24 kg and the moon is 7.36E22 kg. First I am looking for acceleration. I am unclear about what you meant when you suggested that I divide by m to find the acceleration of M.
That was just showing you how I derived the acceleration due to gravity formula. You can ignore the first part of that post now.
Focus on this formula:
$$a = \frac{GM}{r^2}$$
This applies to all bodies. Therefore you can have the acceleration due to the earth's field:
$$a = \frac{GM_e}{r^2}$$ where r is measured from the centre of the earth.
and the acceleration of the moon is:
$$a = \frac{GM_m}{r^2}$$ where r is measured from the centre of the moon.
When these two equations equal each other, you have the point you are looking for.
However! The radii are measured from two different locations. You need to change the form of 'r' in one of the equations.
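A numeric sketch of the step just described: set the two accelerations equal, with r measured from the Moon and D - r from the Earth. The masses are the ones quoted in this thread; the Earth-Moon centre-to-centre distance D is an assumed value (about 3.84E8 m), since the thread never states it.
```
from math import sqrt

M_earth = 5.97e24   # kg
M_moon  = 7.36e22   # kg
D       = 3.84e8    # m, assumed Earth-Moon centre-to-centre distance

# G*M_earth/(D - r)**2 = G*M_moon/r**2  =>  (D - r)/r = sqrt(M_earth/M_moon)
r = D / (1 + sqrt(M_earth / M_moon))
print(f"balance point: {r:.3e} m from the Moon's centre")  # roughly 3.8e7 m
```
Note that G cancels, so only the mass ratio and the separation D matter.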
http://math.stackexchange.com/questions/158988/implicit-function-theorem-example-in-baby-rudin
# Implicit Function Theorem example in Baby Rudin
I am looking at example 2.29 of Baby Rudin (page 227) of my edition to illustrate the implicit function theorem. This is what the example is:
Take $n= 2$ and $m=3$ and consider $\mathbf{f} = (f_1,f_2)$ of $\Bbb{R}^5$ to $\Bbb{R}^2$ given by $$\begin{eqnarray*} f_1(x_1,x_2,y_1,y_2,y_3) &=& 2e^{x_1} + x_2y_1 -4y_2 + 3 \\ f_2(x_1,x_2,y_1,y_2,y_3) &=& x_2\cos x_1 - 6x_1 + 2y_1 - y_3 \end{eqnarray*}.$$ If $\mathbf{a} = (0,1)$ and $\mathbf{b} = (3,2,7)$, then $\mathbf{f(a,b)} = 0$. With respect to the standard bases, the derivative of $f$ at the point $(0,1,3,2,7)$ is the matrix $$[A] = \left[\begin{array}{ccccc} 2 & 3 & 1 & -4 & 0 \\ -6 & 1 & 2 & 0 & -1 \end{array}\right].$$ Hence if we observe the $2 \times 2$ block $$\left[\begin{array}{cc} 2 & 3 \\ -6 & 1 \end{array}\right]$$ it is invertible, and so by the implicit function theorem there exists a $C^1$ mapping $\mathbf{g}$ defined on a neighbourhood of $(3,2,7)$ such that $\mathbf{g}(3,2,7 ) = (0,1)$ and $\mathbf{f}(\mathbf{g}(\mathbf{y}),\mathbf{y}) = 0$.
Now what I don't understand is: given such a $\mathbf{g}$, how does this mean that I can solve for the variables $x_1$ and $x_2$ in terms of $y_1,y_2,y_3$ locally about $(3,2,7)$?
Also if I wanted to carry out this computation explicitly, how can I do it? We do not have a nice and shiny linear system to solve unlike problem 19 of the same chapter.
Thanks.
-
– Dylan Moreland Jun 16 '12 at 8:08
@DylanMoreland Right Thanks. However even for the existence bit, how do I know from the existence of $g$ that I can solve $x_1$ and $x_2$ in terms of $y_1,y_2$ and $y_3$? – BenjaLim Jun 16 '12 at 8:09
Maybe I didn't understand your question. I guess I don't have a better answer than, "You put $\mathbf y$-values into $\mathbf g$ and it spits $\mathbf x$-values out." It seems to me, sadly, that Rudin's example is more about checking that the hypotheses are satisfied. – Dylan Moreland Jun 16 '12 at 8:16
@DylanMoreland It's ok then, because the exercise at the end of the chapter does allow us to do an explicit computation. – BenjaLim Jun 16 '12 at 8:19
Interesting. Which exercise, specifically? – Dylan Moreland Jun 16 '12 at 8:23
show 3 more comments
## 2 Answers
As suggested by several comments the implicit function theorem as well as the inverse function theorem (which is equivalent to the implicit function theorem) are (powerful) existence theorems which in fact provide little help when it comes to explicitly computing an inverse or implicit function. This is, however, usually not necessary in applications in analysis.
What is extremely important in the two theorems (apart from the existence claim) are the assertions of the uniqueness/invertibility of the solution in a whole neighbourhood (having topological consequences, e.g., the inverse function theorem shows that $C^1$ maps with invertible derivative in one point are locally open) and the regularity of the functions whose existence is guaranteed (i.e. $f(x,y)\in C^1 (C^k)$ and the assumptions of the theorem are fulfilled $\Rightarrow g\in C^1 (C^k)$ if $f(x, g(x))=0$).
As most people have difficulties to grasp the implicit function theorem when they first get to see it I think Rudin's intention was to illustrate the steps which are necessary when one wants to apply it.
-
With regards to your first question see this and this. Also Rudin's Principles of Mathematical Analysis theorem 9.27 covers this (see equation (59)).
With regards to your second question I do not think the implicit function theorem gives you an explicit way to solve the system. Consider the function $y = xe^x$. Even trying to solve this in terms of $y$ requires the Lambert W function.
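That said, even without a closed form the implicit function $\mathbf{g}$ can be evaluated numerically. A sketch for Rudin's example, using scipy.optimize.fsolve as the root finder (my choice of tool, not Rudin's):
```
import numpy as np
from scipy.optimize import fsolve

def f(x, y):
    x1, x2 = x
    y1, y2, y3 = y
    return [2 * np.exp(x1) + x2 * y1 - 4 * y2 + 3,
            x2 * np.cos(x1) - 6 * x1 + 2 * y1 - y3]

def g(y, x0=(0.0, 1.0)):
    """Solve f(x, y) = 0 for x near x0; this is the implicit function g(y)."""
    return fsolve(lambda x: f(x, y), x0)

print(g((3, 2, 7)))        # approximately [0, 1], i.e. g(b) = a
print(g((3.1, 2.0, 7.2)))  # nearby y give nearby (x1, x2)
```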
-
http://mathoverflow.net/questions/37792/a-possible-generalization-of-the-homotopy-groups/37793
## A possible generalization of the homotopy groups.
The homotopy groups $\pi_{n}(X)$ arise from considering equivalence classes of based maps from the $n$-sphere $S^{n}$ to the space $X$. As is well known, these maps can be composed, giving rise to a group operation. The resulting group contains a great deal of information about the given space. My question is: is there any extra information about a space that can be discovered by considering equivalence classes of based maps from the $n$-torus $T^{n}=S^{1}\times S^{1}\times \cdots \times S^{1}$? In the case of $T^{2}$, it would seem that since any path $S^{1}\to X$ can be "thickened" to create a path $T^{2}\to X$ if $X$ is three-dimensional, the group arising from based paths $T^{2}\to X$ would contain $\pi_{1}(X)$. Perhaps more generally, can useful information be gained by examining equivalence classes of based maps from some arbitrary space $Y$ to a given space $X$?
-
Well, if you examine homotopy classes of based maps from all based spaces Y at once, then you get enough information to characterize the space X up to homotopy equivalence, by the Yoneda lemma in the homotopy category. (-: – Mike Shulman Sep 5 2010 at 17:25
There's something similar to this that I thought of once: given any two spaces $X$ and $Y$, the set of homotopy classes of maps $X \times I \to Y$ sending all $(x, 0)$ and $(0,x)$ to a fixed base point form a group. If $X = I^{n-1}$, I believe the result contains, at least in some cases, all the homotopy groups $\pi_1$ through $\pi_n$. But given how hard the latter are to compute, I doubt that this construction is all that useful. [But if it is, I am not in a position to know.] – Charles Staats Sep 5 2010 at 20:39
Mike – isn't this true even if we restricted to based maps for all based *co-Moore* spaces at once to X? Or am I completely off-base? – Daniel Moskovich Sep 5 2010 at 21:53
arxiv.org/abs/math/9904026 Groups of Flagged Homotopies and Higher Gauge Theory Valery V.Dolotin - there is some generalization of the homotopy groups. – Alexander Chervov Feb 16 at 14:33
## 4 Answers
There's always information to be got. But in this case:
• Based homotopy classes of maps $T^2\to X$ don't form a group! To define a natural function `$\mu\colon [T,X]_*\times [T,X]_*\to [T,X]_*$`, you need a map $c\colon T\to T\vee T$ (where $\vee$ is one point union). And if you want $\mu$ to be unital, associative, etc., you'll want $c$ to be counital, coassociative, etc. For $T=T^n$ with $n\geq2$, there is no $c$ that is counital. (The usual way to see this is to think about the cohomology $H^*T$ with its cup-product structure.)
• The inclusion $S^1\vee S^1\to T^2$ gives a map $$r\colon [T^2,X]_* \to [S^1\vee S^1,X]_*\approx \pi_1X\times \pi_1X.$$ The image of this map will be pairs $(a,b)$ of elements in $\pi_1X$ which commute: $ab=ba$. It won't usually be injective; so there might be something interesting to think about the in preimages $r^{-1}(a,b)$.
-
Back in the 1940's, Ralph Fox defined something called the torus homotopy group. For a based space $(Y,y_0)$ and natural number $r$, the $r$-dimensional torus homotopy group $\tau_r(Y,y_0)$ is just the fundamental group of the mapping space ${\rm map}(T^{r-1},Y)$, based at the constant map (where $T^{r-1}$ is of course a torus).
The group $\tau_r(Y,y_0)$ contains isomorphic copies of $\pi_n(Y,y_0)$ for all $n\leq r$. Also, Whitehead products become commutators in the torus homotopy group. By passing to the limit over $r$ one obtains the (infinite) torus homotopy group $\tau(Y, y_0)$, which contains all of the homotopy information of $Y$ in one place!
Unfortunately for Fox, the idea doesn't seem to have caught on (although I hear he had a few others which did). MathSciNet only turns up 11 papers containing the phrase "torus homotopy groups" (although the most recent is from 2007).
-
Your problem is that $T^n$ is not in general a co-Moore space. Therefore Eckmann-Hilton duality breaks down, as the dual spaces no longer form a spectrum, and there would be no (co)homology theory dual to such a "homotopy theory". Thus, a theory of homotopy classes of pointed maps from $T^n$ to $X$ would be much less interesting than a theory of homotopy classes of pointed maps from $S^n$ to $X$.
On the other hand, the study of homotopy classes of pointed maps from a co-Moore space other than $S^n$ to $X$ does lead to useful theories of homotopy with coefficients. I believe these classify $X$ up to homotopy equivalence.
-
I was told by Brian Griffiths that Fox was hoping to obtain a generalisation of the van Kampen theorem and so continue work of J.H.C. Whitehead on adding relations to homotopy groups (see his 1941 paper with that title).
However if one frees oneself from the base point fixation one might be led to consider Loday's cat$^n$-group of a based $(n+1)$-ad, $X_*=(X;X_1, \ldots, X_n)$; let $\Phi X_*$ be the space of maps $I^n \to X$ which take the faces of the $n$-cube $I^n$ in direction $i$ into $X_i$ and the vertices to the base point. Then $\Phi$ has compositions $+_i$ in direction $i$ which form a lax $n$-fold groupoid. However the group $\Pi X_*= \pi_1(\Phi, x)$, where $x$ is the constant map at the base point $x$, inherits these compositions to become a cat$^n$-group, i.e. a strict $n$-fold groupoid internal to the category of groups (the proof is non trivial).
There is a Higher Homotopy van Kampen Theorem for this functor $\Pi$ which enables some new nonabelian calculations in homotopy theory (see our paper in Topology 26 (1987) 311-334).
So a key step is to move from spaces with base point to certain structured spaces.
Comment Feb 16, 2013: The workers in algebraic topology near the beginning of the 20th century were looking for higher dimensional versions of the fundamental group, since they knew that the nonabelian fundamental group was useful in problems of analysis and geometry. In 1932, Cech submitted a paper on Higher Homotopy Groups to the ICM at Zurich, but Alexandroff and Hopf quickly proved the groups were abelian for $n >1$ and on these grounds persuaded Cech to withdraw his paper, so that only a small paragraph appeared in the Proceedings. It is reported that Hurewicz attended that conference. In due course, the idea of higher versions of the fundamental group came to be seen as a mirage.
One explanation of the abelian nature of the higher homotopy groups is that group objects in the category of groups are abelian groups, as a result of the interchange law, also called the Eckmann-Hilton argument. However group objects in the category of groupoids are equivalent to crossed modules, and so are in some sense "more nonabelian" than groups. Crossed modules were first defined by J.H.C. Whitehead, 1946, in relation to second relative homotopy groups. This leads to the possibility, now realised, of "higher homotopy groupoids", Higher Homotopy Seifert-van Kampen Theorems, and the notions of higher dimensional group theory.
-
http://www.physicsforums.com/showthread.php?s=9dab01a20e3d35f91d52f5b1b60a3489&p=3806830
## Surface integral problem from H.M. Schey's book
I've been fooling around by myself with the book "div, grad, curl and all that" by H.M. Schey to learn some vector calculus. However, in the second chapter, when he performs the integrals, he skips the part where he finds the limits on x and y. Here's an example:
Compute the surface integral $$\int \int_{S} (x+y)dS$$
where S is the portion of the plane x+y+z=1 in the first octant.
yada yada yada some rewriting etc.
The final integral is $$\sqrt{3}\int \int_{R} (1-y)dxdy$$, R being the area of S projected onto the xy-plane. He then says "this is a simple double integral with value 1/√3, as you should be able to verify.
What I take from this:
"In the first octant" means the first quadrant in the xy-plane, octant being used because of 3-dimensional space.
Finding the limits on x and y, I set z=y=0 to find x, and z=x=0 to find y, both limits being 0 to 1 from the x+y+z=1 equation. But I don't get the same value for the integral as the solution says. Where does it go wrong?
Hi Hixy! Welcome to PF!
Quote by Hixy Finding the limits on x and y, I set z=y=0 to find x, and z=x=0 to find y, both limits being 0 to 1 from the x+y+z=1 equation.
No, you can't do that.
If your first limit is x (which is from 0 to 1), then your y limits will depend on x …
for example, 0 to x or 0 to 1-x or … ?
(or you can do it t'other way round, with y going from 0 to 1, and then the x limits depending on y)
$\int_{x=0}^1\int_{y=0}^1 dydx$ would be an integral over the rectangle with boundaries the lines x = 0, x = 1, y = 0, y = 1. To do an integral like this, you must first decide the order in which you will be integrating. If you decide to integrate with respect to y first, then x, then since you want a number, rather than a function of x, as your answer, you must integrate from x = 0 to x = 1. But then, for each x, y varies from 0 up to the line x + y = 1. Also, because you are integrating over a surface other than the xy-plane, you need to use "dS" for that surface, not just "dydx".

There are a number of ways to do that, but my favorite is this: if the surface is given by z = f(x,y), we can write any point on the surface as $\vec{r}= x\vec{i}+ y\vec{j}+ z\vec{k}= x\vec{i}+ y\vec{j}+ f(x,y)\vec{k}$. The derivative of that vector with respect to y, $\vec{r}_y= \vec{j}+ f_y\vec{k}$, is a vector tangent to the surface whose length gives the rate of change of distances on the surface as y changes. The derivative of that vector with respect to x, $\vec{r}_x= \vec{i}+ f_x\vec{k}$, is a vector tangent to the surface whose length gives the rate of change of distances on the surface as x changes. The cross product of those two vectors, the "fundamental vector product" for the surface, $\vec{r}_y\times\vec{r}_x= f_x\vec{i}+ f_y\vec{j}- \vec{k}$, is perpendicular to the surface and its length gives the rate of change of area as both x and y change: $\sqrt{f_x^2+ f_y^2+ 1}$. The "differential of surface area" is $\sqrt{f_x^2+ f_y^2+ 1}\,dydx$.

Here, $z= f(x,y)= 1- x- y$, so that $\vec{r}_y\times\vec{r}_x= -\vec{i}- \vec{j}- \vec{k}$ and so $dS= \sqrt{3}\,dydx$.
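A quick symbolic check of the resulting double integral with the limits just described (y from 0 to 1 - x inside, x from 0 to 1 outside), assuming sympy is available:
```
from sympy import symbols, integrate, sqrt, simplify

x, y = symbols('x y')
value = sqrt(3) * integrate(integrate(1 - y, (y, 0, 1 - x)), (x, 0, 1))
print(value, simplify(value - 1 / sqrt(3)))  # sqrt(3)/3 and 0, i.e. the value is 1/sqrt(3)
```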
Got it .. ;) Seems so trivial now that I understand. Of course, for y, x has to be a variable to cover the whole area. Then, since y has been taken care of, x can just go from 0 to 1. I get the correct answer.
Thanks for taking the time to write such an elaborate answer, HallsofIvy. That was really helpful. Good point with that way to rewrite dS in terms of dy and dx.
Thanks to both of you!
http://mathhelpforum.com/advanced-statistics/27027-exponential-random-variables.html
# Thread:
1. ## Exponential Random Variables
Let's say we have $n$ exponential random variables $X_1,X_2, \ldots, X_n$. Now consider the following: $X_1 + X_2 + \ldots + X_n$.
What type of distribution would the sum of $n$ exponential random variables, each with mean $\lambda_i$ have?
2. Originally Posted by shilz222
Let's say we have $n$ exponential random variables $X_1,X_2, \ldots, X_n$. Now consider the following: $X_1 + X_2 + \ldots + X_n$.
What type of distribution would the sum of $n$ exponential random variables, each with mean $\lambda_i$ have?
Are the random variables continuous or discrete?
For starters, you might want to read through this. The continuous stuff is in 7.2. This might also shed light on my reply to your hyperexponential question (if more light is needed .....)
So work it for $Z_1 = X_1 + X_2$. Then use that result to get $Z_2 = Z_1 + X_3$. Then use that result to get $Z_3 = Z_2 + X_4$. etc. For a result when the random variables are discrete, read through this.
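As an illustration of that convolution step, here is a sketch that computes the density of $Z_1 = X_1 + X_2$ symbolically for two concrete, distinct rates (1 and 2); for general distinct rates a and b the same computation gives a*b*(exp(-a*z) - exp(-b*z))/(b - a).
```
from sympy import exp, integrate, simplify, symbols

x, z = symbols('x z', positive=True)

# f_{Z_1}(z) = integral over 0 <= x <= z of f_{X_1}(x) * f_{X_2}(z - x)
f_Z = integrate(1 * exp(-1 * x) * 2 * exp(-2 * (z - x)), (x, 0, z))
print(simplify(f_Z))  # 2*exp(-2*z)*(exp(z) - 1), i.e. 2*(exp(-z) - exp(-2*z))
```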
http://mathoverflow.net/revisions/28610/list
## Probability of a Point on a Unit Sphere lying within a Cube
Suppose we have a (n-1 dimensional) Unit Sphere centered at the origin: $$\sum_{i=1}^{n}{x_i}^2 = 1$$ What is the probability that a randomly selected point on the sphere, $(x_1,x_2,x_3,...,x_n)$, has coordinates such that $$\forall i, |x_i| \leq d$$ for some $d \in [0,1]$?
This is equivalent to finding the intersection of the $(n-1)$-hypersphere with the $n$-hypercube of side $2d$ centered at origin, and then taking the ratio of that $(n-1)$-volume over the $(n-1)$-volume of the $(n-1)$-hypersphere.
As there are closed-form formulas for the volume of a hypersphere, the problem reduces to finding the $(n-1)$-volume of the aforementioned intersection.
All attempts I've made to solve this intersection problem have led me to a series of nested integrals, where one or both limits of each integral depend on the coordinate outside that integral, and I know of no way to evaluate it. For example, using hyperspherical coordinates, I have obtained the following integral: $$2^n n! \int_{\phi_{n-1}=tan^{-1}\frac{\sqrt{1-(n-1)d^2}}{d}}^{tan^{-1}1} \int_{\phi_{n-2}=tan^{-1}\frac{\sqrt{1-(n-2)d^2}}{d}}^{tan^{-1}\frac{1}{cos\phi_{n-1}}}\ldots\int_{\phi_1=tan^{-1}\frac{\sqrt{1-d^2}}{d}}^{tan^{-1}\frac{1}{cos\phi_2}} d_{S^{n-1}}V$$ where $$d_{S^{n-1}}V = \sin^{n-2}(\phi_1)\sin^{n-3}(\phi_2)\cdots \sin(\phi_{n-2})\ d\phi_1 \ d\phi_2\ldots d\phi_{n-1}$$ is the volume element of the $(n-1)$–sphere. But this is pretty useless as I can see no way of integrating this accurately for high dimensions (in the thousands, say).
Using cartesian coordinates, the problem can be restated as evaluating: $$\int_{\sum_{i=1}^{n-1}{x_i}^2\leq1, |x_i| \leq d} \frac{1}{\sqrt{1-\sum_{i=1}^{n-1}{x_i}^2}}dx_1 dx_2 \ldots dx_{n-1}$$ which, as far as I know, is un-integrable.
I would greatly appreciate any attempt at estimating this probability (giving an upper bound, say) and how it depends on $n$ and $d$. Or, given a particular probability and fixed $d$, to find $n$ which satisfies that probability.
Edit: This question leads to two questions that are slightly more general:
1) I think part of the difficulty is that neither spherical nor cartesian coordinates work very well for this problem, because we're trying to find the intersection between a region that is best expressed in spherical coordinates (the sphere) and another that is best expressed in cartesian coordinates (the cube). Are there other problems that are similar to this? And how are their solutions usually formulated?
2) Also, the problem with the integral is that the limits of each of the nested integrals is a function of the "outer" variable. Is there any general method of solving these kinds of integrals?
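For what it's worth (my own addition, not part of the original question): if a numerical estimate is acceptable, the probability is easy to approximate by Monte Carlo, using the standard fact that a Gaussian vector divided by its norm is uniformly distributed on the sphere. All parameter choices below are arbitrary.

```python
import numpy as np

def prob_in_cube(n, d, samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform points on S^{n-1}
    return np.mean(np.all(np.abs(x) <= d, axis=1))

# Probe how the probability behaves when d is scaled like c / sqrt(n).
for n in (10, 100, 1000):
    print(n, prob_in_cube(n, 3.0 / np.sqrt(n)))
```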
http://mathoverflow.net/questions/122891/small-index-subgroups-of-sl3-z/123108
## Small index subgroups of SL(3,Z)
I would like to know the smallest index subgroups of SL(3,Z).
The smallest I could find has even entries $a_{3,1}$ and $a_{3,2}$, along the bottom row. I could not figure out whether there are subgroups of index 2 or 3.
A search found lots of information about SL(2,Z) but not SL(3,Z).
Since $SL(3,\mathbb{Z})$ has the congruence subgroup property, I believe that every finite-index subgroup is pulled back from some subgroup of $SL(3,\mathbb{Z}/n)$ for some $n$. So your question can be reduced to the same question over $\mathbb{Z}/n$ (for all $n$). – HW Feb 25 at 15:03
It seems to me that $SL(3,{\mathbb Z})$ is its own commutator. Hence it cannot have a subgroup of index $2$ or $3$. – Aakumadula Feb 25 at 15:59
I would guess that the smallest index is $Card({\mathbb P}^2 ({\mathbb F}_2))=7$. – Aakumadula Feb 25 at 16:06
@David: Concerning `$\mathrm{SL}(2,\mathbb{Z})$`, or its quotient by scalars the modular group, the congruence subgroup property fails and the subgroup structure is much richer than in higher ranks. But the modular group has a lot of older literature relative to its action on the complex upper half plane, etc. – Jim Humphreys Feb 26 at 14:06
## 3 Answers
To fill in the comments, there are basically two serious issues involved.
1) You want to know that every subgroup of finite index in `$\mathrm{SL}(3,\mathbb{Z})$` contains some congruence kernel: the kernel of the natural reduction homomorphism induced by `$\mathbb{Z} \rightarrow \mathbb{Z}/n\mathbb{Z}$`. In other words, the original group satisfies the congruence subgroup property (as HW comments). This is already quite nontrivial to prove and evolved from work of Bass-Lazard-Serre here followed by more definitive work by Bass-Milnor-Serre here. (There is some exposition in the last part of my 1980 Springer Lecture Notes 789.)
2) Now you have to pin down the maximal subgroups of a typical finite group `$\mathrm{SL}(3,n)$` and pull these back to the big group in order to sort out which give you subgroups of minimal index there. Maximal subgroups of the finite groups in question have been extensively studied, especially over finite fields, but in this case some trial-and-error leads pretty quickly to the number 7, as indicated by Aakumadula. This results from the first possibility `$n=2$`, where you get the simple nonabelian group `$\mathrm{SL}(3,2)$` of order `$ 8 \cdot 3 \cdot 7$`, from the general formula for a prime `$p$` given by `$p^3 (p^2-1)(p^3-1)$`. Here there is a maximal parabolic subgroup of index 7.
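For the record, here is the trivial arithmetic behind those numbers (my own illustration, nothing beyond the formulas just quoted):

```python
def order_sl3(p):
    # |SL(3,p)| = p^3 (p^2 - 1)(p^3 - 1) for a prime p
    return p**3 * (p**2 - 1) * (p**3 - 1)

def projective_plane_points(p):
    # |P^2(F_p)| = (p^3 - 1)/(p - 1) = p^2 + p + 1, the index of a maximal parabolic
    return (p**3 - 1) // (p - 1)

print(order_sl3(2))                # 168 = 8 * 3 * 7
print(projective_plane_points(2))  # 7
```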
The search for maximal subgroups is itself a major project, for which I don't have all relevant references at hand. But over a field some basic insight comes from algebraic groups, where you see that a maximal (closed) proper subgroup will be either parabolic or reductive. I'm not sure in the special case at hand how hard it is to pin down the maximal subgroups of the various finite groups directly.
ADDED: As pointed out in comments, it's possible here to rule out subgroups of index 2 or 3 by combining the known commutator group result with a study of actions of the big group on left cosets of a hypothetical subgroup. But a systematic study (especially for other Chevalley-type groups) probably requires something like the program I've sketched above. Given the congruence subgroup property, the study of subgroups of finite index then reduces quickly to the study of finite groups of the same type over finite rings and then prime fields. In your case, this helps to provide a conceptual answer to such questions as: Why does `$\mathrm{SL}(3, \mathbb{Z})$` have a subgroup of index 7?
There is a vast literature by now on maximal subgroups of known finite simple groups, by people including Aschbacher, Kleidman, Liebeck, Scott, Seitz, .... An older brief survey by Kleidman-Liebeck, with an impressive bibliography, appeared in Geom. Dedicata 25 (1988). For your specific group there are very old results, but the general approach is more unified even though intricate.
Given an index $k$ subgroup of $SL(3,Z)$, $k\leq 6$, one obtains a homomorphism to $A_k$ from permuting cosets. By the congruence subgroup property, the image must be congruence, and therefore contains the simple $PSL(3,p)$ as a quotient for some $p$. But we see that no such simple group divides $360$ from your formula. – Agol Feb 25 at 19:24
A step I am missing is why the subgroup of $SL(3,Z)$ has to be the pullback of a subgroup of $SL(3,n)$. Is maximality being used? If $G$ is the subgroup, I get that there exists $H<G$ so that $H$ is the matrices congruent to the identity mod $n$, for some integer $n$. Probably I am missing something easy for the next step. – David Farmer Feb 25 at 20:55
Your $H$ is normal in $SL(3,Z)$ so the only ingredient not explicitly mentioned is the undergraduate-level fact that subgroups of $G$ containing the normal subgroup $N$ biject canonically with subgroups of $G/N$ via the pullback map. – wccanard Feb 25 at 21:16
If a group has a subgroup of index $4$ then it also has either a subgroup of index $2$ or $3$. – Tom Goodwillie Feb 26 at 15:45
In order to answer the question we need a finite presentation of ${\rm SL}(3,\mathbb{Z})$ and a general method to find all subgroups of index $\leq n$ of a finitely presented group:
• A finite presentation for ${\rm SL}(3,\mathbb{Z})$ can be found for example in Theorem 2 in
Marston Conder, Edmund Robertson, Peter Williams: Presentations for 3-dimensional special linear groups over integer rings, Proc. Amer. Math. Soc. 115 (1992), no. 1, 19-26. http://www.ams.org/journals/proc/1992-115-01/S0002-9939-1992-1079696-5/S0002-9939-1992-1079696-5.pdf.
The finite presentation given in this paper is $${\rm SL}(3,\mathbb{Z}) \cong \left< x, y, z \ | \ x^3 = y^3 = z^2 = (xz)^3 = (yz)^3 = (x^{-1}zxy)^2 = (y^{-1}zyx)^2 = (xy)^6 = 1 \right>$$ on the generators $$x \ = \ \left( \begin{array}{rrr} 1 & 0 & 1 \\ 0 & -1 & -1 \\ 0 & 1 & 0 \end{array} \right), \ \ y \ = \ \left( \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} \right), \ \ z \ = \ \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ -1 & -1 & -1 \end{array} \right).$$
• A general method to find all subgroups of index $\leq n$ of a finitely presented group is the so-called low index subgroups procedure. This algorithm is described in Section 5.4 in
Derek F. Holt, Bettina Eick, and Eamonn A. O'Brien, Handbook of computational group theory, Discrete Mathematics and its Applications (Boca Raton), Chapman & Hall / CRC, Boca Raton, FL, 2005. MR 2129747 (2006f:20001).
For an online resource, see e.g.
Marston Conder: Applications and adaptations of the low index subgroups procedure, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.107.5164.
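Before turning to GAP, the explicit generator matrices quoted in the first reference above can be sanity-checked directly; the following is my own quick check (plain numpy, not needed for the argument) that $x$, $y$, $z$ do satisfy the eight relations of the presentation:

```python
import numpy as np

x = np.array([[1, 0, 1], [0, -1, -1], [0, 1, 0]])
y = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
z = np.array([[0, 1, 0], [1, 0, 0], [-1, -1, -1]])

I = np.eye(3, dtype=int)
mp = np.linalg.matrix_power
# determinants are 1, so the inverses are again integer matrices
xi = np.rint(np.linalg.inv(x)).astype(int)
yi = np.rint(np.linalg.inv(y)).astype(int)

relations = [mp(x, 3), mp(y, 3), mp(z, 2),
             mp(x @ z, 3), mp(y @ z, 3),
             mp(xi @ z @ x @ y, 2), mp(yi @ z @ y @ x, 2),
             mp(x @ y, 6)]
print(all(np.array_equal(r, I) for r in relations))   # True
```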
The low index subgroups procedure is implemented in the GAP computer algebra system (cf. http://www.gap-system.org). Hence all we need to do is to enter the presentation of ${\rm SL}(3,\mathbb{Z})$ taken from the above paper into GAP ...
````
gap> F := FreeGroup("x","y","z");;
gap> AssignGeneratorVariables(F);
#I Assigned the global variables [ x, y, z ]
gap> G := F/[x^3,y^3,z^2,(x*z)^3,(y*z)^3,(x^-1*z*x*y)^2,(y^-1*z*y*x)^2,(x*y)^6];
<fp group on the generators [ x, y, z ]>
````
... and to run the algorithm on it:
````
gap> sub := LowIndexSubgroupsFpGroup(G,7);;
gap> List(sub,H->Index(G,H));
[ 1, 7, 7 ]
gap> gens := List(sub,GeneratorsOfGroup);
[ [ x, y, z ], [ x, z, y*z*y^-1, (y*x)^2*y ],
[ x, y*x^-1*z^-1, y^-1*z*y, z*y^-1*x*y, y^-1*x*y*x^-1*y ] ]
````
This tells us that the smallest index of a proper subgroup of ${\rm SL}(3,\mathbb{Z})$ is 7, and that there are 2 conjugacy classes of subgroups of index 7. Now it is straightforward to obtain generators for our subgroups in terms of matrices:
````
gap> x := [ [ 1, 0, 1 ], [ 0, -1, -1 ], [ 0, 1, 0 ] ];;
gap> y := [ [ 0, 1, 0 ], [ 0, 0, 1 ], [ 1, 0, 0 ] ];;
gap> z := [ [ 0, 1, 0 ], [ 1, 0, 0 ], [ -1, -1, -1 ] ];;
gap> List(gens[2],g->MappedWord(g,GeneratorsOfGroup(G),[x,y,z]));
[ [ [ 1, 0, 1 ], [ 0, -1, -1 ], [ 0, 1, 0 ] ],
[ [ 0, 1, 0 ], [ 1, 0, 0 ], [ -1, -1, -1 ] ],
[ [ 0, 0, 1 ], [ -1, -1, -1 ], [ 1, 0, 0 ] ],
[ [ -1, -1, -1 ], [ 0, 0, 1 ], [ 0, 1, -1 ] ] ]
gap> List(gens[3],g->MappedWord(g,GeneratorsOfGroup(G),[x,y,z]));
[ [ [ 1, 0, 1 ], [ 0, -1, -1 ], [ 0, 1, 0 ] ],
[ [ -1, -1, -1 ], [ 0, 1, 1 ], [ 0, 0, -1 ] ],
[ [ -1, -1, -1 ], [ 0, 0, 1 ], [ 0, 1, 0 ] ],
[ [ 1, 1, 0 ], [ 0, 0, 1 ], [ 0, -1, 0 ] ],
[ [ -1, 0, -1 ], [ 2, 1, 1 ], [ 0, -1, 0 ] ] ]
````
So our representatives for the conjugacy classes of subgroups of ${\rm SL}(3,\mathbb{Z})$ of index 7 are: $$G_{7,1} \ = \ \left< \left(\begin{array}{rrr} 1&0&1\\ 0&-1&-1\\ 0&1&0 \end{array}\right), \ \left(\begin{array}{rrr}% 0&1&0\\ 1&0&0\\ -1&-1&-1 \end{array}\right), \ \left(\begin{array}{rrr}% 0&0&1\\ -1&-1&-1\\ 1&0&0 \end{array}\right), \ \left(\begin{array}{rrr}% -1&-1&-1\\ 0&0&1\\ 0&1&-1 \end{array}\right) \right>$$ and $$G_{7,2} \ = \ \left< \left(\begin{array}{rrr}% 1&0&1\\ 0&-1&-1\\ 0&1&0 \end{array}\right), \ \left(\begin{array}{rrr}% -1&-1&-1\\ 0&1&1\\ 0&0&-1 \end{array}\right), \ \left(\begin{array}{rrr}% -1&-1&-1\\ 0&0&1\\ 0&1&0 \end{array}\right), \ \left(\begin{array}{rrr}% 1&1&0\\ 0&0&1\\ 0&-1&0 \end{array}\right), \ \left(\begin{array}{rrr}% -1&0&-1\\ 2&1&1\\ 0&-1&0 \end{array}\right) \right>.$$ The computations above take just a few milliseconds. If one is willing to put in a minute or so, then one can go a bit further and compute representatives for the conjugacy classes of subgroups of ${\rm SL}(3,\mathbb{Z})$ of index $\leq 30$:
````
gap> sub := LowIndexSubgroupsFpGroup(G,30);;
gap> List(sub,H->Index(G,H));
[ 1, 8, 7, 28, 14, 13, 7, 13, 28, 26, 14, 26, 28, 28, 28, 24, 21 ]
````
So we have subgroups of indices 7, 8, 13, 14, 21, 24, 26 and 28, and there are no proper subgroups of other indices $\leq 30$. Generators of the subgroups in terms of our generators $x, y, z$ of ${\rm SL}(3,\mathbb{Z})$ can be determined easily as well:
````
gap> List(sub,GeneratorsOfGroup);
[ [ x, y, z ], [ x, y ], [ x, z, y*z*y^-1, (y*x)^2*y ],
[ x, z, y*z*y^-1, y*x*(y*x^-1)^2*y, (y*x)^2*(y^-1*x)^2*y^-1 ],
[ x, z, (y*x)^2*y ], [ x, y*x^-1*z^-1, y^-1*z*y, z*y^-1*x^-1*y ],
[ x, y*x^-1*z^-1, y^-1*z*y, z*y^-1*x*y, y^-1*x*y*x^-1*y ],
[ x, y*x^-1*z^-1, y^-1*z*y, z*y^-1*x*y, y^-1*(x*y)^2*x^-1*y^-1*x^-1*y ],
[ x, y*x^-1*z^-1, y^-1*z*y, z*y^-1*x*y,
y^-1*(x*y)^2*x^-1*y^-1*x*y*x*y^-1*x^-1*y,
y^-1*(x*y)^2*x^-1*y*x*y^-1*x*y*x^-1*y^-1*x^-1*y ],
[ x, y*x^-1*z^-1, z*y^-1*x^-1*y ],
[ x, y*z*x^-1*y^-1, z*y*x*z^-1, (y*x)^2*y, y^-1*x*y*x^-1*y ],
[ x, y*z*x^-1*y^-1, z*y*x*z^-1, z*y^-1*x*y, y^-1*(x*y)^2*x^-1*y^-1*x^-1*y ],
[ x, y^-1*z*y, (y*x)^2*y, y^-1*x*y*x^-1*y ],
[ x, y^-1*z*y, (y*x)^2*y, z*y^-1*(x^-1*y)^2 ],
[ x, y^-1*z*y, y*x^-1*z*y^-1*x^-1*y^-1, y^-1*x*y*z^-1*x^-1*y^-1 ],
[ y*x^-1, y^-1*x ], [ z, x*z*x^-1, y*z*y^-1, (y*x)^3 ] ]
````
Great answer. I wish I could "accept" two answers. – David Farmer Feb 28 at 18:59
You don't need the congruence subgroup property to see that this group has no proper subgroups of index less than $7$, in other words that it has no nontrivial homomorphism to $S_6$.
Nor do you need a presentation of the group; all you need is that it is generated by six elements satisfying certain relations, and a little patience.
Let $x_{i,j}$ be the matrix with $1$ on the diagonal, $1$ in the $(i,j)$ spot, and $0$ otherwise. It is not hard to see, by row reduction, that $SL(3,\mathbb Z)$ is generated by these elements. When $i$, $j$, and $k$ are distinct, then $x_{i,j}$ is the commutator of $x_{i,k}$ and $x_{k,j}$, and also $x_{i,j}$ commutes with $x_{i,k}$ and with $x_{k,j}$.
Now suppose for contradiction that $S_6$ has elements $x_{i,j}$ satisfying these same relations and not all equal to the identity. No $x_{i,j}$ can be the identity because then by taking commutators they all would be the identity. Being a commutator, $x_{i,j}$ must belong to $A_6$. It is the commutator of $x_{i,k}$ and $x_{k,j}$, two elements of $A_6$ that commute with it. Looking at all cases, you see that the only nontrivial elements of $A_6$ whose centralizers are nonabelian are those of the form $(ab)(cd)$, product of two disjoint $2$-cycles. If $x_{i,j}=(ab)(cd)$ then $x_{i,k}$ and $x_{k,j}$ must be elements of its centralizer and again of the same type. But the only such elements are $(ab)(cd)$, $(ac)(bd)$, $(ad)(bc)$, $(ab)(ef)$, and $(cd)(ef)$. There are only a few possibilities for pairs of these whose commutator is $(ab)(cd)$. You'll find that none of them will lead to a solution, a choice of $x_{i,j}$ for all $i$ and $j$.
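If you would rather not check the centralizer cases by hand, the claim about $A_6$ is small enough to confirm by brute force; this is my own throwaway script (pure Python, runs in a few seconds):

```python
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(6))

def cycles(p):
    seen, lengths = [False] * 6, []
    for i in range(6):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            lengths.append(length)
    return lengths

def is_even(p):
    return sum(l - 1 for l in cycles(p)) % 2 == 0

A6 = [p for p in permutations(range(6)) if is_even(p)]
identity = tuple(range(6))

for g in A6:
    if g == identity:
        continue
    cent = [h for h in A6 if compose(g, h) == compose(h, g)]
    nonabelian = any(compose(a, b) != compose(b, a) for a in cent for b in cent)
    if nonabelian:
        # g must be a product of two disjoint 2-cycles: cycle type (2,2,1,1)
        assert sorted(cycles(g), reverse=True) == [2, 2, 1, 1]
print("nonabelian centralizers in A_6 occur only for (ab)(cd) elements")
```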
When I posted an erroneous version of this answer earlier today it garnered several votes. How do I give those votes back? – Tom Goodwillie Feb 28 at 17:47
@Tom: For the `$3 \times 3$` case here, I agree one doesn't need the congruence subgroup property (to find a subgroup of index 7, in particular). I wanted to conceptualize the approach to Chevalley groups of rank >1 over `$\mathbb{Z}$`. But ruling out index below 7 here gets messy. Your argument against subgroups of small index looks most efficient. Note: the commutator relations here (which give a presentation with one more relation added) go back to Nielsen-Magnus; see Steinberg's Yale lectures (p. 96) or Cor. 10.3 in Milnor's Introduction to Algebraic K-Theory. – Jim Humphreys Feb 28 at 18:40
P.S. An interesting conference article by Steinberg has related results. For example, his Theorem 3 should reduce the problem here to looking at small index subgroups of `$\mathrm{SL}(3,2)$`: R.Steinberg, Some consequences of the elementary relations in `$\mathrm{SL}_n$`. Finite groups—-coming of age (Montreal, Que., 1982), 335–350, Contemp. Math., 45, Amer. Math. Soc., Providence, RI, 1985. – Jim Humphreys Mar 1 at 21:46
http://www.physicsforums.com/showthread.php?p=2820867
## Equation of the circumference of an ellipse parametric equations
1. The problem statement, all variables and given/known data
Consider the ellipse given by the parametric equations $x=3\cos(t)$, $y=\sin(t)$, $0 \leq t \leq 2\pi$. Set up an integral that gives the circumference of the ellipse. Also find the area enclosed by the ellipse.
2. Relevant equations
$\int \sqrt{1+(dy/dx)^2}\, dt$
3. The attempt at a solution
$\int \sqrt{1+\left(-\frac{2}{3}\cot(t)\right)^2}\, dt$. It should also be the integral from 0 to $2\pi$. I'm not sure what I did wrong, but I know that $-\frac{2}{3}\cot(t)$ is not right.
area: $A=2\int \frac{1}{2}\left(\frac{2}{3}\tan(t)\right)dt$
$=\int \frac{2}{3}\tan(t)\,dt$
$=-\frac{2}{3}\ln(|\cos(t)|)$ evaluated from $\pi$ to 0
I know that I got something wrong here too, and I assume it is the 2/3 tan(t) but I'm not sure what I did wrong again.
Your formula for arc length is wrong. It's $$\int \sqrt{1 + (\frac{dy}{dx})^2} \ dx$$ $$= \int \sqrt{1 + (\frac{\frac{dy}{dt}}{\frac{dx}{dt}})^2} \ \frac{dx}{dt}dt$$ EDIT: Pathetic lapse of judgement on my part >_>...ignore this post.
Mentor Arc length for a parametrized curve can also be written this way: $$\int_a^b \sqrt{(\frac{dx}{dt})^2 + (\frac{dy}{dt})^2}dt$$
## Equation of the circumference of an ellipse parametric equations
$$= \int \sqrt{1 + (\frac{\frac{dy}{dt}}{\frac{dx}{dt}})^2} \ \frac{dx}{dt}dt$$
how did you get that and how do you plan to integrate it? :)
But as far as I know the ds element for parametrization: f(x(t),y(t)) = x(t) + y(t) is:
$$\int f(x,y) ds = \int f(x(t),y(t)) \sqrt{\left ( \frac{dx}{dt} \right )^2 + \left ( \frac{dy}{dt} \right )^2}dt$$
OK, the way I did it, I actually used $\int\sqrt{1+\frac{dy/dt}{dx/dt}}$, and that's how I came up with the $\left(-\frac{2}{3}\cot(t)\right)^2$. I had $-\frac{2}{3}\frac{\cos(t)}{\sin(t)}$ and I simplified it to $\cot(t)$. Would this even be the right formula to use for the circumference? I thought that arc length would be the right choice as long as I evaluated it from 0 to $2\pi$.
Mentor
Quote by mickellowery: OK, the way I did it, I actually used $\int\sqrt{1+\frac{dy/dt}{dx/dt}}$
This isn't the right formula for arc length. If you simplify Raskolnikov's formula in post 2, you get the one I showed in the next post.
Quote by mickellowery: and that's how I came up with the $\left(-\frac{2}{3}\cot(t)\right)^2$. I had $-\frac{2}{3}\frac{\cos(t)}{\sin(t)}$ and I simplified it to $\cot(t)$. Would this even be the right formula to use for the circumference? I thought that arc length would be the right choice as long as I evaluated it from 0 to $2\pi$.
Oh geez I just noticed a typo in the original problem. It should be x=3cos(t) y=2sin(t) not y=sin(t) sorry about that.
Alright, so with the correct equations, would the proper integral for the circumference be $\int \sqrt{(-3\sin(t))^2 +(2\cos(t))^2}\,dt$? And then for the area enclosed by the ellipse, would I use $\int (3\cos(t)-2\sin(t))^2$ evaluated from 0 to $\pi$?
Mentor You should simplify the integrand. Also, the limits are from 0 to $2\pi$. Tip: Put all your LaTeX code inside one pair of tex tags. Instead of this: $$\int$$$$\sqrt{(-3sin(t))^2 +(2cos(t))^2}$$ do this: $$\int \sqrt{(-3sin(t))^2 +(2cos(t))^2}dt$$
You should probably also notice that they only asked you to set up the circumference integral, not solve it. It's an elliptic integral. It's not elementary. But you should be able to solve the area integral.
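For anyone curious about the actual numbers, here is a quick numerical evaluation I did on the side (not something the problem asks for; scipy is assumed available):

```python
import numpy as np
from scipy.integrate import quad

# Circumference: integral of sqrt((dx/dt)^2 + (dy/dt)^2) for x = 3cos t, y = 2sin t.
circumference, _ = quad(lambda t: np.sqrt((-3*np.sin(t))**2 + (2*np.cos(t))**2),
                        0, 2*np.pi)

# Area via Green's theorem: A = integral of x dy = integral of (3cos t)(2cos t) dt on [0, 2*pi].
area, _ = quad(lambda t: 3*np.cos(t) * 2*np.cos(t), 0, 2*np.pi)

print(circumference)   # about 15.87 -- an elliptic integral, no elementary closed form
print(area)            # about 18.85 = 6*pi, matching A = pi*a*b with a = 3, b = 2
```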
http://mathoverflow.net/questions/104467/a-homotopy-equivalence-between-total-spaces-in-a-hurewicz-fibration-which-is-no/104469
## A homotopy equivalence between total spaces in a (Hurewicz) fibration which is not a fiber homotopy equivalence
In Hatcher's Algebraic Topology book it is noted after 4.61 that:
fiber preserving map + homotopy equivalence $\Rightarrow$ fiber homotopy equivalence.
## Question:
Could there be two fibrations over the same base space where the total spaces are homotopy equivalent, but there is no fiber homotopy equivalence between them? (and therefore also no fiber preserving map)
If so, I would be glad to have a simple example.
## 4 Answers
You can fiber a circle over a circle in many ways.
Could you please give some more detail? – Shlomi A Aug 11 at 18:01
A two to one map (covering space) from circle to circle is a fibration, in which every fiber has two points. The identity map is another, in which every fiber has one point. – Tom Goodwillie Aug 11 at 19:03
Let X be any non-contractible space. Let the base of your fibration be the disjoint union of countably many copies of X and countably many copies of the point. Let one fibration be the identity and the other be the identity over every component but one point; put X above that point. The two spaces are both abstractly homeomorphic to the base, but a fiber homotopy equivalence would have to be a homotopy equivalence between X and the point over that particular component.
Right. Thanks! :) – Shlomi A Aug 11 at 18:05
There are lots of such examples, but here is the simplest one I know. Let $M_{p,q}$ be the total space of the principal circle bundle over $S^2\times S^2$ with Euler class $(p, q)$.
If $p,q$ are relatively prime, then it is known that $M_{p,q}$ is diffeomorphic to $S^2\times S^3$. Namely, Smale proved in his paper "On the structure of 5-manifolds" that the diffeomorphism type of closed, simply-connected, spin 5-manifolds is determined by the second cohomology, which is $\mathbb Z$ for $M_{p,q}$ and also for $S^2\times S^3$. (If you have trouble showing $M_{p,q}$ satisfies the above conditions, see Wang-Ziller's paper "Einstein metrics on principal torus bundles.")
On the other hand, the fiber homotopy equivalence in this case preserves the Euler class (up to sign).
I think James-Whitehead are quite relevant:
MR0068836 Reviewed James, I. M.; Whitehead, J. H. C. The homotopy theory of sphere bundles over spheres. II. Proc. London Math. Soc. (3) 5, (1955). 148–166.
MR0061838 Reviewed James, I. M.; Whitehead, J. H. C. The homotopy theory of sphere bundles over spheres. I. Proc. London Math. Soc. (3) 4, (1954). 196–218.
http://mathhelpforum.com/number-theory/145584-number-theory.html
# Thread:
1. ## Number Theory
An unrestricted partition (order does not count, equality of sizes is ok) is called self-conjugate if it is identical with its conjugate. E.g. 8 = 4 + 2 + 1 + 1.
Show that the number of self-conjugate unrestricted partitions of n is equal to the number of partitions of n into distinct odd parts.
(optional) express this result as an identity of generating functions.
2. Originally Posted by Waikato
An unrestricted partition (order does not count, equality of sizes is ok) is called self-conjugate if it is identical with its conjugate. E.g. 8 = 4 + 2 + 1 + 1.
Show that the number of self-conjugate unrestricted partitions of n is equal to the number of partitions of n into distinct odd parts.
(optional) express this result as an identity of generating functions.
A partition $\lambda$ is self-conjugate if $\lambda=\lambda'$ in terms of Ferrers diagram.
The bijection can be shown graphically. See wiki.
The generating function for this is $\prod_{\text{k=odd}}(1+x^k)=(1+x)(1+x^3)(1+x^5) \cdots$.
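If it helps to see the identity numerically before proving it, here is a small brute-force count I wrote (plain Python, independent of the Ferrers-diagram argument):

```python
def partitions(n, max_part=None):
    """Yield partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(p):
    """Transpose of the Ferrers diagram."""
    return tuple(sum(1 for part in p if part >= i)
                 for i in range(1, (p[0] if p else 0) + 1))

for n in range(1, 16):
    self_conj = sum(1 for p in partitions(n) if p == conjugate(p))
    distinct_odd = sum(1 for p in partitions(n)
                       if len(set(p)) == len(p) and all(part % 2 for part in p))
    assert self_conj == distinct_odd
    print(n, self_conj)
```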
3. ## partition
Hello,
Have you shown that the number of self conjugate unrestricted partitions of n is equal to the number of partitions of n into distinct odd parts?
4. Originally Posted by Waikato
Hello,
Have you shown that the number of self conjugate unrestricted partitions of n is equal to the number of partitions of n into distinct odd parts?
Yes. If your definition of an "unrestricted" partition is simply a partition without any further constraint being given.
See the claim in the above link:
Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts.
For instance, (5, 5, 4, 3, 2) |- n, where n=19, is a self-conjugate partition. It converts into the 9+7+3=19. Read the link I provided. It will give you some explanations graphically.
The link I provided only shows the sketch of the proof. I encourage you to make a full proof on your own by using the sketch of the proof in the link.
http://mathoverflow.net/revisions/103715/list
Let me set
$$k^2=\frac{4ar}{(r+a)^2} <1.$$
If we replace $\theta$ with $2\theta$ we reduce this to the integral
$$2 \underbrace{\int_0^\pi \sqrt{1-k^2\cos^2\theta} d\theta}_{=I}.$$
Now set
$$x=\cos\theta$$
so that $$dx=-\sqrt{1-x^2} d\theta$$
and
$$I=\int_{-1}^1\frac{\sqrt{1-k^2x^2}}{\sqrt{1-x^2}} dx= 2\underbrace{\int_0^1 \frac{\sqrt{1-k^2x^2}}{\sqrt{1-x^2}} dx}_{=:E_2(k)}.$$
The integral $E_2(k)$ is called Jacobi's complete elliptic integral of the second kind. There is no simple formula for it, but you can have a look at the beautiful book by H. McKean and V. Moll, Elliptic Curves: Function Theory, Geometry, Arithmetic, Cambridge University Press, 1997.
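As a numerical cross-check (my own addition, assuming scipy is available): after the substitution $x=\sin\theta$, $E_2(k)$ becomes the standard complete elliptic integral of the second kind, which scipy parametrizes by $m=k^2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe

a, r = 1.0, 0.25                       # arbitrary values giving k^2 < 1
m = 4 * a * r / (r + a) ** 2           # m = k^2

# E_2(k) after the substitution x = sin(theta)
E2, _ = quad(lambda t: np.sqrt(1 - m * np.sin(t)**2), 0, np.pi / 2)
print(E2, ellipe(m))                   # the two numbers should agree
```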
http://mathoverflow.net/questions/76071/question-about-the-definition-of-hamiltonian-group-action/76092
## Question about the definition of hamiltonian group action.
So I'm reading the part in Ana Cannas da Silva's book "Lectures on Symplectic Geometry" (available on her website) about hamiltonian group actions on a symplectic manifold. She starts by defining $\mathbb{R}$-actions and $S^1$-actions by saying that the vector field on M that they generate must be hamiltonian. Then she defines hamiltonian $T^n$-actions (p.154) by the requirement that the restriction of the action to each circle $\{1\} \times\ldots\times S^1\times\ldots\times\{1\}$ be hamiltonian (plus the requirement that each of the n corresponding hamiltonian functions be invariant under the action of the rest of $T^n$).
And then, finally, she defines a hamiltonian action of a general Lie group G as one having a "moment map". This is a natural generalisation because the existence of a moment map is equivalent (I believe!) to the fact that for each $X\in Lie(G)$, the vector field $X^*$ on M induced by $X$ is hamiltonian (i.e. $X^*_p=\frac{d}{dt}(\exp(tX)\cdot p)(0)$).
For indeed, if that is the case, then one can just define the moment map $\mu : M \rightarrow Lie(G)^*$ by setting $\langle\mu(p),X\rangle:=\mu^X(p)$, where $\mu^X$ is a hamiltonian function for $X^*$ chosen so that $\mu$ is G-equivariant with respect to the coadjoint action on $Lie(G)^*$. (Note that in the case where $G$ is commutative, such as $G=T^n$, this last condition boils down to $\mu$ being $G$-invariant.)
The question: I am trying to prove that the ad-hoc definition implies the general definition in the $T^n$ case. The problem I am having is that we only know that the n vector fields $X_1^*,\ldots,X_n^*$ (one for each subcircle $\{1\}\times\ldots\times S^1\times\ldots\times \{1\} \subset T^n$) are hamiltonian. Knowing that the $X_i$'s form a basis of $Lie(T^n)$, does this imply that every $X\in Lie(T^n)$ induces a hamiltonian $X^*$? Does something like '$(X+Y)^* =X^* +Y^*$' hold?
More generally, given a Lie group $G$ acting on a symplectic manifold $M$, is it necessary to check that each $X\in Lie(G)$ induce a hamiltonian vector field on $M$, or is it sufficient to check this for a basis of $Lie(G)$ ?
Thanks.
## 3 Answers
Just so you're aware, not every author insists that a momentum map be infinitesimally equivariant (Prof. Figueroa-O'Farrill's condition 2), although it is part of da Silva's definition (edit: actually, on checking, da Silva requires the slightly stronger condition of equivariance, i.e. $\mu(g\cdot p)=\mu(p)\circ\mathrm{Ad}_{g^{-1}}$). The literature isn't uniform - for example, Marsden just requires the first condition. I'm biased towards this definition since Jerry Marsden, who unfortunately passed away a year ago today, was my advisor.
To answer your main question, yes the map $X\in\mathfrak{g}\mapsto X^* \in\mathfrak{X}(M)$ is always linear, regardless of whether the action is Hamiltonian. To see this explicitly, let $\Phi^p:G\rightarrow M$ be the map $g\mapsto g\cdot p$. By definition $X^* _p$ is precisely $T_e\Phi^p(X)$ (you can see this agrees with your definition by writing $\exp(tX)\cdot p$ as $\Phi^p(\exp(tX))$, and using the chain rule to calculate $\frac{d}{dt}$), and derivative maps $T_xf$ are always linear. So yes, it's enough to check condition 1 on a basis.
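Just to spell the linearity out in the notation of the question (this only restates the chain-rule argument above): $$(aX + bY)^*_p = T_e\Phi^p(aX + bY) = a\,T_e\Phi^p(X) + b\,T_e\Phi^p(Y) = a\,X^*_p + b\,Y^*_p,$$ so in particular $(X+Y)^* = X^* + Y^*$, which is exactly the identity asked about.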
Ah yes, I was missing this insight. Thank you Paul. – Gigou Sep 21 2011 at 23:50
Glad it helped :) – Paul Skerritt Sep 21 2011 at 23:52
Your belief is only partially correct. The existence of a momentum map requires two conditions:
1. First, as you point out, if $X^*$ is the fundamental vector field corresponding to $X \in \mathfrak{g}$, then $i_{X^*} \omega = d\mu^X$ should be exact. This defines $\mu^X$ up to a locally constant function.
2. But also you need equivariance, which boils down to $\lbrace \mu^X,\mu^Y \rbrace = \mu^{[X,Y]}$.
This second condition is not automatic, and indeed it is possible that the first condition holds, but not the second. The reason is that from the first condition it follows that $$c(X,Y) := \lbrace \mu^X,\mu^Y \rbrace - \mu^{[X,Y]}$$ is a locally constant function. It follows that $c$ so defined is a Lie algebra cocycle, whence it defines a class in the Lie algebra cohomology $H^2(\mathfrak{g};H^0(M))$, where $H^0(M)$ is the trivial $\mathfrak{g}$-module of locally constant functions.
Concerning your actual question, the map $X \mapsto X^*$ is a Lie algebra homomorphism $\mathfrak{g} \to \mathfrak{X}(M)$ to the vector fields on $M$ and hence, in particular, it is linear. The map $c$ is clearly then bilinear, so again it is enough to check on a basis.
Hello José, thanks for answering my question. I am not saying that 1. implies 2., but I am saying that the locally constant function in 1. can be chosen so that 2. is satisfied. Is this false? (Sorry, I don't understand the Lie algebra cohomology argument.) Now concerning my actual question, which you answer by the affirmative (i.e. (X+Y)*=X*+Y*), can you provide an explanation or a reference? Can you read this property just from properties of the exponential map? – Gigou Sep 21 2011 at 20:45
By the way, just so we're clear, I agree that X-->X* is a Lie algebra homomorphism PROVIDED the action is hamiltonian (i.e., provided there exists a moment map for it, and in particular, provided X* is hamiltonian for each X in Lie(G)). But of course this is irrelevant for my question because I work with the hypothesis that I have only a basis of X's whose X*'s are hamiltonian. And using only this hypothesis, I want to show that every Y in Lie(G) induces a hamiltonian Y*. One way to do this is if we had (X+Y)*=X*+Y*, but without a moment map, how do we prove this? – Gigou Sep 21 2011 at 21:27
The locally function in 1. cannot always be chosen so that 2. is satisfied. The obstruction is the class of the cocycle $c$ in the Lie algebra cohomology and that need not be zero. In a way, of course, this is just a tautology, but phrasing it in terms of Lie algebra cohomology allows you to conclude that in many cases (e.g., semisimple Lie algebras) there is no obstruction. The map $X \to X^*$ is a Lie algebra homomorphism independently of whether the action is hamiltonian or even symplectic. It's simply the fact that you have an action of a Lie group on a manifold. – José Figueroa-O'Farrill Sep 21 2011 at 22:32
"locally function" of course means "locally constant function". (I wish I could edit comments...) – José Figueroa-O'Farrill Sep 21 2011 at 22:32
Hey. I thought I was done with this but it came back to haunt me! You said the map X-->X* is a Lie algebra homomorphism, and I thought I could prove this using Paul Skerritt's insight that $X^* =\Phi^p_*(X)$ by using the fact that if $V_i$, $W_i$ (i=1,2) are F-related at p, then so are $[V_1,V_2]$ and $[W_1,W_2]$. Hence, $[X,Y]^*_p=\Phi^p_*([X,Y]_e)=[\Phi^p_*(X),\Phi^p_*(Y)]_p=[X^*,Y^*]_p$. BUT, Silva says that X-->X* is supposed to be a Lie-algebra anti-homomorphism. And if I assume the existence of a moment map (as per your definition), then using that $f\rightarrow X_f$ is a – Gigou Sep 26 2011 at 21:50
Thanks Igor, unfortunately, the information I am after is dismissed with "The moment map property is again easy to check." on page 8. – Gigou Sep 21 2011 at 20:32
http://mathhelpforum.com/pre-calculus/90866-simple-log-question-algorithm.html
# Thread:
1. ## Simple log question for algorithm
I'm terrible with logs (just can't get my head around them).
The question is about big-O notation but the part I don't understand is about logarithms.
I've been told the following:
$\log_3 N = O(\log_2 N)$ because $\log_3 N = \log_3 2 \times \log_2 N$.
I don't understand why. Can someone go over the simple rules of why that is (the bit after "because")?
And isn't the implication of that, that any $\log_b N$ will be $O(\log_2 N)$?
2. Originally Posted by Pan
I'm terrible with logs (just can't get my head around them).
The question is about big-O notation but the part I don't understand is about logarithms.
I've been told the following:
$\log_3 N = O(\log_2 N)$ because $\log_3 N = \log_3 2 \times \log_2 N$.
I don't understand why. Can someone go over the simple rules of why that is (the bit after "because")?
And isn't the implication of that, that any $\log_b N$ will be $O(\log_2 N)$
$\log_3{2} \cdot \log_2{N} =$
change of base for the 2nd factor ...
$\log_3{2} \cdot \frac{\log_3{N}}{\log_3{2}} = \log_3{N}$
3. Originally Posted by skeeter
$\log_3{2} \cdot \log_2{N} =$
change of base for the 2nd factor ...
$\log_3{2} \cdot \frac{\log_3{N}}{\log_3{2}} = \log_3{N}$
If you had $\log_b{N}$,
can it be made equal to $\log_b{c} \cdot \log_2{N}$, where $c$ is just some number that doesn't change?
In other words, any log can be made so that the dominant factor (for large N) is a base 2 log.
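That's exactly the change-of-base identity at work: $\log_b N = \log_b 2 \cdot \log_2 N$ for any fixed base $b$, so every $\log_b N$ is a constant multiple of $\log_2 N$ and hence $O(\log_2 N)$. A quick numerical check (my own snippet, standard library only):

```python
import math

# log_b(N) = log_b(2) * log2(N), so log_b(N) differs from log2(N) by a constant factor.
for b in (3, 7, 10):
    for N in (16, 1000, 10**6):
        assert math.isclose(math.log(N, b), math.log(2, b) * math.log2(N))
print("log_b(N) == log_b(2) * log2(N) for every tested b and N")
```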