url
stringlengths 14
1.76k
| text
stringlengths 100
1.02M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://nips.cc/Conferences/2019/ScheduleMultitrack?event=14245
|
Poster
Fast and Furious Learning in Zero-Sum Games: Vanishing Regret with Non-Vanishing Step Sizes
James Bailey · Georgios Piliouras
Wed Dec 11th 10:45 AM -- 12:45 PM @ East Exhibition Hall B + C #217
We show for the first time that it is possible to reconcile in online learning in zero-sum games two seemingly contradictory objectives: vanishing time-average regret and non-vanishing step sizes. This phenomenon, that we coin fast and furious" learning in games, sets a new benchmark about what is possible both in max-min optimization as well as in multi-agent systems. Our analysis does not depend on introducing a carefully tailored dynamic. Instead we focus on the most well studied online dynamic, gradient descent. Similarly, we focus on the simplest textbook class of games, two-agent two-strategy zero-sum games, such as Matching Pennies. Even for this simplest of benchmarks the best known bound for total regret, prior to our work, was the trivial one of $O(T)$, which is immediately applicable even to a non-learning agent. Based on a tight understanding of the geometry of the non-equilibrating trajectories in the dual space we prove a regret bound of $\Theta(\sqrt{T})$ matching the well known optimal bound for adaptive step sizes in the online setting. This guarantee holds for all fixed step-sizes without having to know the time horizon in advance and adapt the fixed step-size accordingly.As a corollary, we establish that even with fixed learning rates the time-average of mixed strategies, utilities converge to their exact Nash equilibrium values. We also provide experimental evidence suggesting the stronger regret bound holds for all zero-sum games.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9209145903587341, "perplexity": 887.8902950328276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388012.14/warc/CC-MAIN-20200525063708-20200525093708-00041.warc.gz"}
|
http://cds.cern.ch/collection/ATLAS%20Papers?ln=zh_CN
|
ATLAS Papers
2019-07-16
08:44
Measurement of the inclusive cross-section for the production of jets in association with a $Z$ boson in proton-proton collisions at 8 TeV using the ATLAS detector The inclusive cross-section for jet production in association with a $Z$ boson decaying into an electron-positron pair is measured as a function of the transverse momentum and the absolute rapidity of jets using 19.9 fb$^{-1}$ of $\sqrt{s} = 8$ TeV proton-proton collision data collected with the ATLAS detector at the Large Hadron Collider. [...] CERN-EP-2019-133. - 2019. Fulltext - Previous draft version
2019-07-14
21:32
Searches for lepton-flavour-violating decays of the Higgs boson in $\sqrt{s}=13$ TeV pp collisions with the ATLAS detector This Letter presents direct searches for lepton flavour violation in Higgs boson decays, $H\rightarrow e\tau$ and $H\rightarrow\mu\tau$, performed with the ATLAS detector at the LHC. [...] CERN-EP-2019-126. - 2019. Fulltext - Previous draft version
2019-07-11
15:01
Measurement of flow harmonics correlations with mean transverse momentum in lead-lead and proton-lead collisions at $\sqrt{s_{NN}}=5.02$ TeV with the ATLAS detector To assess the properties of the quark-gluon plasma formed in heavy-ion collisions, the ATLAS experiment at the LHC measures a correlation between the mean transverse momentum and the magnitudes of the flow harmonics. [...] CERN-EP-2019-130. - 2019. Fulltext - Previous draft version
2019-07-11
14:41
ATLAS $b$-jet identification performance and efficiency measurement with $t\bar{t}$ events in $pp$ collisions at $\sqrt{s}=13$ TeV / ATLAS Collaboration The algorithms used by the ATLAS Collaboration during Run 2 of the Large Hadron Collider to identify jets containing $b$-hadrons are presented. [...] arXiv:1907.05120 ; CERN-EP-2019-132. - 2019. - 52 p. Fulltext - Previous draft version - Fulltext
2019-07-08
16:15
Measurement of $W^{\pm}$-boson and $Z$-boson production cross-sections in $pp$ collisions at $\sqrt{s}=2.76$ TeV with the ATLAS detector / ATLAS Collaboration The production cross-sections for $W^{\pm}$ and $Z$ bosons are measured using ATLAS data corresponding to an integrated luminosity of 4.0 pb$^{-1}$ collected at a centre-of-mass energy $\sqrt{s}=2.76$ TeV. [...] arXiv:1907.03567 ; CERN-EP-2019-095. - 2019. - 41 p. Fulltext - Previous draft version - Fulltext
2019-07-05
12:46
Search for heavy neutral Higgs bosons produced in association with $b$-quarks and decaying to $b$-quarks at $\sqrt{s}=13$ TeV with the ATLAS detector / ATLAS Collaboration A search for heavy neutral Higgs bosons produced in association with one or two $b$-quarks and decaying to $b$-quark pairs is presented using 27.8 fb$^{-1}$ of $\sqrt{s}=13$ TeV proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider during 2015 and 2016. [...] arXiv:1907.02749 ; CERN-EP-2019-092. - 2019. - 46 p. Fulltext - Previous draft version - Fulltext
2019-06-28
21:50
Resolution of the ATLAS muon spectrometer monitored drift tubes in LHC Run 2 / ATLAS Collaboration The momentum measurement capability of the ATLAS muon spectrometer relies fundamentally on the intrinsic single-hit spatial resolution of the monitored drift tube precision tracking chambers. [...] arXiv:1906.12226 ; CERN-EP-2019-091. - 2019. - 36 p. Fulltext - Previous draft version - Fulltext
2019-06-27
06:56
Identification of boosted Higgs bosons decaying into $b$-quark pairs with the ATLAS detector at 13 TeV / ATLAS Collaboration This paper describes a study of techniques for identifying Higgs bosons at high transverse momenta decaying into bottom-quark pairs, $H \rightarrow b\bar{b}$, for proton-proton collision data collected by the ATLAS detector at the Large Hadron Collider at a centre-of-mass energy $\sqrt{s}=13$ TeV. [...] arXiv:1906.11005 ; CERN-EP-2019-085. - 2019. - 54 p. Fulltext - Previous draft version - Fulltext
2019-06-22
21:39
Properties of jet fragmentation using charged particles measured with the ATLAS detector in $pp$ collisions at $\sqrt{s}=13$ TeV / ATLAS Collaboration This paper presents a measurement of quantities related to the formation of jets from high-energy quarks and gluons (fragmentation). [...] arXiv:1906.09254 ; CERN-EP-2019-090. - 2019. - 56 p. Fulltext - Previous draft version - Fulltext
2019-06-21
08:45
Search for diboson resonances in hadronic final states in 139 fb$^{-1}$ of $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector / ATLAS Collaboration Narrow resonances decaying into $WW$, $WZ$ or $ZZ$ boson pairs are searched for in 139 fb$^{-1}$ of proton-proton collision data at a centre-of-mass energy of $\sqrt{s}=13$ TeV recorded with the ATLAS detector at the Large Hadron Collider from 2015 to 2018. [...] arXiv:1906.08589 ; CERN-EP-2019-044. - 2019. - 41 p. Fulltext - Previous draft version - Fulltext
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988129138946533, "perplexity": 2218.330455544922}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524568.14/warc/CC-MAIN-20190716135748-20190716161748-00407.warc.gz"}
|
http://gmatclub.com/forum/in-the-xy-plane-if-line-k-has-negative-slope-and-passes-128380.html
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 30 Jul 2016, 06:55
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# In the xy-plane, if line k has negative slope and passes
Author Message
TAGS:
### Hide Tags
Manager
Joined: 08 Jun 2011
Posts: 98
Followers: 1
Kudos [?]: 36 [0], given: 65
In the xy-plane, if line k has negative slope and passes [#permalink]
### Show Tags
29 Feb 2012, 13:18
7
This post was
BOOKMARKED
00:00
Difficulty:
(N/A)
Question Stats:
87% (02:02) correct 13% (00:38) wrong based on 45 sessions
### HideShow timer Statistics
Whenever I see a similar question, my mind freezes and I go into fetal position. I am unable to picture them and even if sketch them, I am still confused as to how approach them.
I know how to get a slope.
I know the basic equation of a line.
I know how a prep bisector works.
I know the distance between two points.
I know how to get the mid between two points.
I am just unable to lump all these together and solve these questions quick enough.
Here are some examples of these questions:
In the xy-plane, if line k has negative slope and passes through the point (−5,r ), is the x-intercept of line k positive?
(1) The slope of line k is –5.
(2) r > 0
Official answer for this one is :
[Reveal] Spoiler:
E
In the rectangular coordinate system, are the points
(r,s) and (u,v ) equidistant from the origin?
(1) r + s = 1
(2) u = 1 – r and v = 1 – s
OA:
[Reveal] Spoiler:
C
If line k in the xy-plane has equation y = mx + b, where
m and b are constants, what is the slope of k ?
(1) k is parallel to the line with equation
y = (1 – m)x + b + 1.
(2) k intersects the line with equation y = 2x + 3 at
the point (2,7).
OA :
[Reveal] Spoiler:
A
In the XY plane, region R consists of all the points (x,y) such that 2x+3y<=6. Is the point (r,s) in region R?
1. 3r+2s=6
2. r<=3 & s<=2
OA :
[Reveal] Spoiler:
E
These questions seem to pop up on every GMAT prep I took.
Any helps or tricks are appreciated. If I can't thank you in a post, I will make sure I kudos you.
Last edited by Bunuel on 29 Feb 2012, 13:59, edited 1 time in total.
Topic is locked. The links to the open discussions of these questions are given in the posts below.
Math Expert
Joined: 02 Sep 2009
Posts: 34120
Followers: 6109
Kudos [?]: 76923 [5] , given: 9992
Re: These questions really scare me - coordinate Geom. [#permalink]
### Show Tags
29 Feb 2012, 13:34
5
KUDOS
Expert's post
4
This post was
BOOKMARKED
1. In the xy-plane, if line k has negative slope and passes through the point (-5,r), is the x-intercept of line k positive?
This question can be done with graphic approach (just by drawing the lines) or with algebraic approach.
Algebraic approach:
Equation of a line in point intercept form is $$y=mx+b$$, where: $$m$$ is the slope of the line, $$b$$ is the y-intercept of the line (the value of $$y$$ for $$x=0$$), and $$x$$ is the independent variable of the function $$y$$.
We are told that slope of line $$k$$ is negative ($$m<0$$) and it passes through the point (-5,r): $$y=mx+b$$ --> $$r=-5m+b$$.
Question: is x-intercept of line $$k$$ positive? x-intercep is the value of $$x$$ for $$y=0$$ --> $$0=mx+b$$ --> is $$x=-\frac{b}{m}>0$$? As we know that $$m<0$$, then the question basically becomes: is $$b>0$$?.
(1) The slope of line $$k$$ is -5 --> $$m=-5<0$$. We've already known that slope was negative and there is no info about $$b$$, hence this statement is insufficient.
(2) $$r>0$$ --> $$r=-5m+b>0$$ --> $$b>5m=some \ negative \ number$$, as $$m<0$$ we have that $$b$$ is more than some negative number ($$5m$$), hence insufficient, to say whether $$b>0$$.
(1)+(2) From (1) $$m=-5$$ and from (2) $$r=-5m+b>0$$ --> $$r=-5m+b=25+b>0$$ --> $$b>-25$$. Not sufficient to say whether $$b>0$$.
Graphic approach:
If the slope of a line is negative, the line WILL intersect quadrants II and IV. X and Y intersects of the line with negative slope have the same sign. Therefore if X and Y intersects are positive, the line intersects quadrant I; if negative, quadrant III.
When we take both statement together all we know is that slope is negative and that it crosses some point in II quadrant (-5, r>0) (this info is redundant as we know that if the slope of the line is negative, the line WILL intersect quadrants II). Basically we just know that the slope is negative - that's all. We can not say whether x-intercept is positive or negative from this info.
Below are two graphs with positive and negative x-intercepts. Statements that the slope=-5 and that the line crosses (-5, r>0) are satisfied.
$$y=-5x+5$$:
Attachment:
1.png [ 9.73 KiB | Viewed 9878 times ]
$$y=-5x-20$$:
Attachment:
2.png [ 10.17 KiB | Viewed 9870 times ]
More on this please check Coordinate Geometry chapter of Math Book: math-coordinate-geometry-87652.html
In case of any question please post it here: in-the-xy-plane-if-line-k-has-negative-slope-and-passes-110044.html
Hope it helps.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 34120
Followers: 6109
Kudos [?]: 76923 [2] , given: 9992
Re: These questions really scare me - coordinate Geom. [#permalink]
### Show Tags
29 Feb 2012, 13:35
2
KUDOS
Expert's post
1
This post was
BOOKMARKED
2. In the rectangular coordinate system, are the points (r,s) and (u,v) equidistant from the origin?
(1) r + s = 1
(2) u = 1 - r and v = 1 - s
Distance between the point A (x,y) and the origin can be found by the formula: $$D=\sqrt{x^2+y^2}$$.
Basically the question asks is $$\sqrt{r^2+s^2}=\sqrt{u^2+v^2}$$ OR is $$r^2+s^2=u^2+v^2$$?
(1) $$r+s=1$$, no info about $$u$$ and $$v$$;
(2) $$u=1-r$$ and $$v=1-s$$ --> substitute $$u$$ and $$v$$ and express RHS using $$r$$ and $$s$$ to see what we get: $$RHS=u^2+v^2=(1-r)^2+(1-s)^2=2-2(r+s)+ r^2+s^2$$. So we have that $$RHS=u^2+v^2=2-2(r+s)+ r^2+s^2$$ and thus the question becomes: is $$r^2+s^2=2-2(r+s)+ r^2+s^2$$? --> is $$r+s=1$$? We don't know that, so this statement is not sufficient.
(1)+(2) From (2) question became: is $$r+s=1$$? And (1) says that this is true. Thus taken together statements are sufficient to answer the question.
In case of any question please post it here: in-the-rectangular-coordinate-system-are-the-points-r-s-92823.html
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 34120
Followers: 6109
Kudos [?]: 76923 [2] , given: 9992
Re: These questions really scare me - coordinate Geom. [#permalink]
### Show Tags
29 Feb 2012, 13:37
2
KUDOS
Expert's post
If Line k in the xy-plane has equation y = mx + b, where m and b are constants, what is the slope of k?
$$y=mx+b$$ is called point-intercept form of equation of a line. Where: $$m$$ is the slope of the line; $$b$$ is the y-intercept of the line; $$x$$ is the independent variable of the function $$y$$.
So we are asked to find the value of $$m$$.
(1) k is parallel to the line with equation y = (1-m)x + b +1 --> parallel lines have the same slope --> slope of this line is $$1-m$$, so $$1-m=m$$ --> $$m=\frac{1}{2}$$. Sufficient.
(2) k intersects the line with equation y = 2x + 3 at the point (2, 7) --> so line k contains the point (2,7) --> $$7=2m+b$$ --> can not solve for $$m$$. Not sufficient.
In case of any question please post it here: if-line-k-in-the-xy-plane-has-equation-y-mx-b-where-m-100295.html
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 34120
Followers: 6109
Kudos [?]: 76923 [2] , given: 9992
Re: These questions really scare me - coordinate Geom. [#permalink]
### Show Tags
29 Feb 2012, 13:41
2
KUDOS
Expert's post
In the xy-plane, region R consists of all the points (x, y) such that $$2x + 3y =< 6$$ . Is the point (r,s) in region R?
I'd say the best way for this question would be to try boundary values.
Q: is $$2r+3s\leq{6}$$?
(1) $$3r + 2s = 6$$ --> very easy to see that this statement is not sufficient:
If $$r=2$$ and $$s=0$$ then $$2r+3s=4<{6}$$, so the answer is YES;
If $$r=0$$ and $$s=3$$ then $$2r+3s=9>6$$, so the answer is NO.
Not sufficient.
(2) $$r\leq{3}$$ and $$s\leq{2}$$ --> also very easy to see that this statement is not sufficient:
If $$r=0$$ and $$s=0$$ then $$2r+3s=0<{6}$$, so the answer is YES;
If $$r=3$$ and $$s=2$$ then $$2r+3s=12>6$$, so the answer is NO.
Not sufficient.
(1)+(2) We already have an example for YES answer in (1) which valid for combined statements:
If $$r=2<3$$ and $$s=0<2$$ then $$2r+3s=4<{6}$$, so the answer is YES;
To get NO answer try max possible value of $$s$$, which is $$s=2$$, then from (1) $$r=\frac{2}{3}<3$$ --> $$2r+3s=\frac{4}{3}+6>6$$, so the answer is NO.
Not sufficient.
Number picking strategy for this question is explained here: in-the-xy-plane-region-r-consists-of-all-the-points-x-y-102233.html#p795613
In case of any question pleas post it here: in-the-xy-plane-region-r-consists-of-all-the-points-x-y-102233.html
_________________
Re: These questions really scare me - coordinate Geom. [#permalink] 29 Feb 2012, 13:41
Similar topics Replies Last post
Similar
Topics:
41 In the xy-plane, if line k has negative slope and passes 16 02 Jul 2012, 02:45
In the xy-plane, if line k has a negative slope and passes 7 06 Jun 2012, 23:04
3 In the xy-plane, if line k has negative slope and passes 4 26 Feb 2011, 07:21
3 In the xy-plane, if line k has negative slope and passes 4 22 May 2010, 14:41
21 In the xy-plane, if line k has negative slope and passes 24 17 Jan 2010, 09:19
Display posts from previous: Sort by
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8670945167541504, "perplexity": 1243.334495441037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257836399.81/warc/CC-MAIN-20160723071036-00284-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://moeller-lab.com/?p=316
|
# Division of labor in transhydrogenase by alternating proton translocation and hydride transfer.
###### Josephine H. Leung, Lici A. Schurig-Briccio, Mutsuo Yamaguchi, Arne Moeller, Jeffrey A. Speir, Robert B. Gennis, Charles D. Stout
NADPH/NADP+ (the reduced form of NADP+/nicotinamide adenine dinucleotide phosphate) homeostasis is critical for countering oxidative stress in cells. Nicotinamide nucleotide transhydrogenase (TH), a membrane enzyme present in both bacteria and mitochondria, couples the proton motive force to the generation of NADPH. We present the 2.8 Å crystal structure of the transmembrane proton channel domain of TH from Thermus thermophilus and the 6.9 Å crystal structure of the entire enzyme (holo-TH). The membrane domain crystallized as a symmetric dimer, with each protomer containing a putative proton channel. The holo-TH is a highly asymmetric dimer with the NADP(H)–binding domain (dIII) in two different orientations. This unusual arrangement suggests a catalytic mechanism in which the two copies of dIII alternatively function in proton translocation and hydride transfer.
DOI: 10.1126/science.1260451
PMID: 25574024
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8407168984413147, "perplexity": 16213.536107933194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888402.81/warc/CC-MAIN-20201025070924-20201025100924-00248.warc.gz"}
|
http://math.ncku.edu.tw/research/talk_detail.php?id=1497
|
NCTS(South)/ NCKU Math Colloquium DATE 2014-10-30¡@16:10-17:00 PLACE R204, 2F, NCTS, NCKU SPEAKER ¤ý®¶¨k ±Ð±Â¡]¥x¤j¼Æ¾Ç¨t¡^ TITLE Quantitative Uniqueness Estimates, Landis' Conjecture, and Related Questions ABSTRACT In the late 60's, E.M. Landis conjectured that if $\Delta u+Vu=0$ in $\R^n$ with $\|V\|_{L^{\infty}(\R^n)}\le 1$ and $\|u\|_{L^{\infty}(\R^n)}\le C_0$ satisfying $|u(x)|\le C\exp(-C|x|^{1+})$, then $u\equiv 0$. Landis' conjecture was disproved by Meshkov who constructed such $V$ and nontrivial $u$ satisfying $|u(x)|\le C\exp(-C|x|^{\frac 43})$. He also showed that if $|u(x)|\le C\exp(-C|x|^{\frac 43+})$, then $u\equiv 0$. A quantitative form of Meshkov's result was derived by Bourgain and Kenig in their resolution of Anderson localization for the Bernoulli model in higher dimensions. It should be noted that both $V$ and $u$ constructed by Meshkov are \emph{complex-valued} functions. It remains an open question whether Landis' conjecture is true for real-valued $V$ and $u$. In view of Bourgain and Kenig's scaling argument, Landis' conjecture is closely related to the estimate of the maximal vanishing order of $u$ in a bounded domain. In this talk, I would like to discuss my recent joint work with Kenig and Silvestre on Landis' conjecture in two dimensions.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205628633499146, "perplexity": 616.804044089492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189141.23/warc/CC-MAIN-20201127044624-20201127074624-00408.warc.gz"}
|
https://zbmath.org/?q=an%3A1073.34072
|
×
# zbMATH — the first resource for mathematics
Higher order abstract Cauchy problems: their existence and uniqueness families. (English) Zbl 1073.34072
The authors deal with the abstract Cauchy problem for higher-order linear differential equations $u^{(n)}(t)+\sum^{n-1}_{k=0}A_ku^{(k)}(t)=0,\;t\geq 0,\quad u^{(k)}(0)=u_k, \;0\leq k\leq n-1,\tag{1}$ and its inhomogeneous version, where $$A_0,\dots,A_{n-1}$$ are linear operators in a Banach space $$X$$. The authors introduce a new operator family of bounded linear operators from a Banach space $$Y$$ into $$X$$, called an existence family for (1), so that the existence and continuous dependence on initial data can be studied and some basic results in a quite general setting can be obtained. Necessary and sufficient conditions, ensuring (1) to possess an exponentially bounded existence family, are presented in terms of Laplace transforms. As applications, two concrete initial value problems for partial differential equations are studied.
##### MSC:
34G10 Linear differential equations in abstract spaces 47D06 One-parameter semigroups and linear evolution equations 35K90 Abstract parabolic equations
Full Text:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9151487946510315, "perplexity": 433.26966114240554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038468066.58/warc/CC-MAIN-20210418043500-20210418073500-00267.warc.gz"}
|
https://www.physicsforums.com/threads/higgs-and-fermion-masses.184355/
|
# Higgs and fermion masses
1. Sep 13, 2007
### arivero
Let me see if I get it right or I dreamed it: in order to give mass to a quark or a lepton the higgs field must be in the same isospin representation that the fermion, must it? IE, can a particle in a isospin triplet get mass from the minimal higgs? Or in the reverse, should a triplet higgs contribute to the mass of a standard model quark?
2. Sep 13, 2007
### BenTheMan
arivero---
The only requirement is that you make a singlet under the gauge group out of the higgs and two fermions---this is the requirement that the Lagrangian be gauge invariant. I may be wrong, but isospin isn't what you should be thinking of. You should be thinking of standard model quantum numbers, i.e. SU(3)xSU(2)xU(1).
I wrote a rather long post on mass terms and the higgs that may answer some of your questions on another forum. I'll link to it here, but I don't know if linking to another forum is exactly kosher :)
3. Sep 13, 2007
### arivero
Ah yes, I call isospin to the quantum numbers of the electroweak SU(2). Some old books name it "weak isospin" and I got hang of the name. Point is, given that the left fermion is a SU(2) doublet, are we forced to put the higgs field also in a SU(2) doublet or are there other solutions?
Similar Discussions: Higgs and fermion masses
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9450588822364807, "perplexity": 710.855446878211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549428257.45/warc/CC-MAIN-20170727122407-20170727142407-00253.warc.gz"}
|
https://tex.stackexchange.com/questions/301969/compilation-of-several-projects-at-once
|
# Compilation of several projects at once
I search a way to (re)compile several different projects at once (all assumed to be compile individually without error). Here, I present the context and the problem.
The context:
I have a topic, e.g. "Sciences", and wrote several projects to it (articles, papers). They all share a same template directory containing the skeleton main file, a "Settings" folder, an "Article" folder, an "Images" folder and a "Vocabulary" folder, set as follow:
/my/path/Topic/TeX-Template/
Settings/
packages.sty
settings.sty
macros.sty
/Images/
image.ewm.eps
Vocabulary/
vocabulary.sty
Articles/
topic-main.tex
topic-corpus.tex
• vocabulary.sty is a collection of project specific macros, i.e. \newcommand{\technicaltermA}{technical term A}.
• topic-main.tex contains the base of the document (\documentclass{...}, etc.)
• topic-corpus.tex contains the skeleton of the project (organized by \intput{./file.tex} lines).
• Articles directory is empty, it will contains the section's files of the project, and input-ed in the topic-corpus.tex
Then, there's a little script "topic.sh" included in my $PATH that says: cp -a$TOPIC/*tex .
cp -a $TOPIC/Articles ln -s$TOPIC/Settings
ln -s $TOPIC/Images ln -s$TOPIC/Vocabulary
Of course, I created a bash variable $TOPIC pointing to /my/path/Topic/Topic-Template Once I created a NewProject folder in which I executed the script, I rename the "topic-" files to "newprojectname-". I have then many projects in the Sciences folder, for example ScienceA, ScienceB and ScienceC. They all are correctly compiled. Now the problem: I need to correct a term "technical term A" appearing in all projects, provided by a "\technicaltermA" macro from the vocabulary.sty. Therefore, I modify the macro. how do I apply the correction to all projects at once, without having to go in each folder and compile manually? It would require the program to jump into each folder, and compile each project with its own -main.tex file. (Edit: <dt> removed...) • Welcome! What is with all the <dt> etc.? Maybe I'm missing the concept.... As the answer you've got indicates, this really isn't TeX-specific. You don't have to use make to solve it, but any solution is likely to be along those lines i.e. depend on what's available on your system outside TeX Live or MikTeX or whatever you use. A make solution is worth doing if you expect to do this a lot. Or you can write a script yourself, if it is rarer and that's easier for you. Or, if it is a one-time thing, you can just use the facilities provided by your shell directly to loop through them. – cfr Apr 3 '16 at 2:22 ## 1 Answer This is not a tex question, but a general compilation question. Essentially, you have to write a makefile in each directory saying how each *-main.tex is compiled. Then you have to write a root makefile which executes make in each of the subdirectories, along the following lines: .PHONY: all default clean realclean ... default: @for i in ls | egrep "(Sciences)|(Arts)|..."; do {$(MAKE) default -C i; } done
...
Yes, this is tedious. But if you provide the right dependencies, make will take care of the rest.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4412706792354584, "perplexity": 10048.805312439423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00184.warc.gz"}
|
http://mathoverflow.net/users/25510/alexandre-eremenko?tab=activity
|
Alexandre Eremenko
Reputation
29,650
81/100 score
12h comment Simply connected noncompact surfaces One does not need this classification to deal with simply connected case. Feb 6 comment A simple question about ordinary diffential equations of first order More advanced differential equations books treat the general case. The keyword is "singular solutions", see also "Clairaut equation". Feb 4 comment To determine if a 2 variable symmetric function is addition formula of one variable function or not? I believe that the question of what rational addition formulas are possible was completely solved by J. F. Ritt. Currently my Mathscinet is down, when it recovers I will give an exact reference. Feb 3 answered analytic continuations Feb 2 comment reference request: simple facts about vector-valued $L^p$ spaces @Jack: Dieudonné, J. Éléments d'analyse. Gauthier-Villars, Paris, 1981. Feb 2 comment reference request: simple facts about vector-valued $L^p$ spaces @Jack: Cartan, Henri Calcul différentiel. Hermann, Paris 1967 Feb 1 comment reference request: simple facts about vector-valued $L^p$ spaces Almost any French Calculus textbook: Cartan, Dieudonne, etc. Jan 29 comment Gaussian and the convex hull of moment curves You defined only ONE moment curve. What is "the set of moment curves"? Jan 28 revised Examples of pluripolar sets added 130 characters in body Jan 28 reviewed Approve Examples of pluripolar sets Jan 28 revised Examples of pluripolar sets added 444 characters in body Jan 28 answered Examples of pluripolar sets Jan 25 comment How to analytically evaluate this n-dimensional iterated integral? @Andrea Becker: that was exactly what I asked: in what sense do you want to understand your integral? Only absolutely convergent integrals are defined unambiguously. This one is not absolutely convergent. Jan 25 comment How to analytically evaluate this n-dimensional iterated integral? But it is divergent. What do you mean by "evaluate"? Jan 24 awarded Nice Answer Jan 24 answered Polynomials with the same values set on the unit circle Jan 21 revised functions with orthogonal Jacobian added 385 characters in body Jan 21 comment functions with orthogonal Jacobian I already edited my answer to address the original question as well. Jan 21 revised functions with orthogonal Jacobian added 276 characters in body Jan 21 answered functions with orthogonal Jacobian
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.768216073513031, "perplexity": 1481.81718471556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157262.85/warc/CC-MAIN-20160205193917-00054-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://www.mathisfunforum.com/viewtopic.php?pid=281554
|
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ π -¹ ² ³ °
You are not logged in.
## #1 2013-08-15 03:57:24
Member
Registered: 2013-08-15
Posts: 3
### n th derivative
the problem is to prove that -∫(-1 to 1) u^(n-1)(x) u^(n+1)(x)dx = (-1)^n ∫(-1 to 1)u(x)u^(2n)(x)dx
here (n-1),(n+1),(2n) are n th derivatives and u(x) = (x²-1)^n
plz help me to prove above problem...
Offline
## #2 2013-08-15 04:29:11
bobbym
bumpkin
From: Bumpkinland
Registered: 2009-04-12
Posts: 109,606
### Re: n th derivative
Hi;
You should latex these so they are readable.
This is a good online latex creator. It uses pull down menus and produces perfect latex every time.
http://latex.codecogs.com/editor.php
That is the LHS of your question. Refresh my memory, which polynomials are u(x)?
In mathematics, you don't understand things. You just get used to them.
If it ain't broke, fix it until it is.
Always satisfy the Prime Directive of getting the right answer above all else.
Offline
## #3 2013-08-21 04:43:01
Member
Registered: 2013-08-15
Posts: 3
### Re: n th derivative
that problem u interpreted is correct, and u(x)=\left(x^2-1)^n
Offline
## #4 2013-08-21 07:27:53
bobbym
bumpkin
From: Bumpkinland
Registered: 2009-04-12
Posts: 109,606
### Re: n th derivative
Hi;
So the whole question looks like this:
In mathematics, you don't understand things. You just get used to them.
If it ain't broke, fix it until it is.
Always satisfy the Prime Directive of getting the right answer above all else.
Offline
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9726189374923706, "perplexity": 18220.616819539853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463613135.2/warc/CC-MAIN-20170529223110-20170530003110-00454.warc.gz"}
|
https://astronomy.stackexchange.com/questions/25895/is-a-supernova-powerful-enough-to-destroy-the-other-star-in-a-binary-system
|
# Is a supernova powerful enough to destroy the other star in a binary system?
(This question was originally posted via World Building, but I was instructed to post it here) https://worldbuilding.stackexchange.com/questions/109457/is-a-supernova-powerful-to-destroy-the-other-star-in-a-binary-system?noredirect=1#comment332775_109457
The question is basically the same:
Can a binary star system have one of the two stars go supernova in the first place, and if so:
The first scenario would be:
• Star A of the binary swells to red giant status, and then is either touching or consumes star B: what would happen?
Scenario 2:
• Star A goes supernova, with maybe 1 AU of distance between it and Star B.
What info Id like to know:
• Can it destroy the other star via supernova?
• What is the effective range of the supernova, and can it destroy stars a light year or more away?
• It's best to ask one question per question on Stack Exchange sites. OTOH, why do you even mention scenario 1? The red giant phase happens in the middle of a star's life, a long time before any kind of supernova can occur. BTW, there are various types of supernova, but they all require a large star, a star with the mass of our Sun can't go supernova. – PM 2Ring Apr 13 '18 at 14:44
• OTOH, you can have a binary system where the heavier star goes red giant and then eventually becomes a white dwarf. When the lighter star goes red giant the white dwarf can pull matter from the red giant, and that can lead to nova explosions, or if it pulls enough matter you can get a type Ia supernova. See en.wikipedia.org/wiki/Type_Ia_supernova – PM 2Ring Apr 13 '18 at 15:16
• @PM i mention Red Giant because the original premise is that the stars lifecycle is being artifically accelerated to an extreme degree. I mentioned 1 solar mass because typically people request that I add some sort of baseline to my question – Razmode Apr 13 '18 at 15:35
• Ok, if you can accelerate time in the star's core that may also allow you to get around the mass restriction. But it does make it hard to give a scientifically accurate answer when you bring in SciFi elements like that, and we like our answers here to be science, not SciFi. ;) – PM 2Ring Apr 13 '18 at 15:52
• Understood. I guess we can tell you about real supernovae here, and let you extrapolate. :) – PM 2Ring Apr 13 '18 at 16:30
The gravitational binding energy of a solar mass star of uniform density is $3GM^2/5R = 2.2774\times 10^{41}$ Joule. The energy of a typical supernova is $10^{44}$ J. So it looks at first glance like the explosion could disperse the other star...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4375755786895752, "perplexity": 952.0785259974967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703499999.6/warc/CC-MAIN-20210116014637-20210116044637-00222.warc.gz"}
|
https://math.stackexchange.com/questions/1642872/how-to-find-the-index-of-following-subgroup
|
# how to find the index of following subgroup?
if I denotes the principal congurence group of level 2 i.e. $I=\{ M \in SL(2,Z) ; \:M \:\:\text{congruent to I} \mod(2)\}$.
or I= \begin{bmatrix}2\mathbb{Z}+1&2\mathbb{Z}\\2\mathbb{Z}&2\mathbb{Z}+1\end{bmatrix}
H be its subgroup \begin{bmatrix}2\mathbb{Z}+1&4\mathbb{Z}\\2\mathbb{Z}&2\mathbb{Z}+1\end{bmatrix}
how to prove above subgroup is of index 2 in I.
it seems that $4\mathbb{Z}$ and $4\mathbb{Z}+2$ will be two members of the cosets where entries written by me are a12 position of the matrices?
am i right?
$X\sim Y$ iff $XH=YH$ iff $XY^{-1}\in H$. If $X=\begin{pmatrix}p&q\\r&s\end{pmatrix},Y=\begin{pmatrix}u&v\\w&x\end{pmatrix}\in I$, then $X\sim Y$ iff $-pv+qu=0 \mod 4$.
Let $H_1$ be the equivalence class of $X=I_2$: the condition on $Y$ is $v=0\mod 4$. Let $H_2$ be the equivalence class of $X=\begin{pmatrix}1&2\\0&1\end{pmatrix}$: the condition on $Y$ is $-v+2u=0\mod 4$. Since $2u=2\mod 4$, the condition reduces to $v=2\mod 4$. Since $v$ is even, $H_1,H_2$ is a partition of $I$ and we are done.
take an element of I not in H.
a=\begin{bmatrix}1&2\\0&1\end{bmatrix}
writing I=H ∪ Ha
H=\begin{bmatrix}2\mathbb{Z}+1&4\mathbb{Z}\\2\mathbb{Z}&2\mathbb{Z}+1\end{bmatrix}
Ha=\begin{bmatrix}2\mathbb{Z}+1&2+2\mathbb{Z}\\2\mathbb{Z}&2\mathbb{Z}+1\end{bmatrix} as for any x in I
x=\begin{bmatrix}2a+1&2b\\2c&2d+1\end{bmatrix}
we have only 2 possibilities for b
if b is even then x is element of H and if it is odd then x is element of Ha proving I as union of two cosets
proving further they are disjoint is easy as
so clearly intersection is empty.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789243936538696, "perplexity": 158.60201033929278}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783000.84/warc/CC-MAIN-20200128184745-20200128214745-00163.warc.gz"}
|
http://math.stackexchange.com/questions/599875/how-can-i-compare-three-different-variable
|
# How can I compare three different variable?
I have to verify the presence or the the absence of a phenomenon in 3 different cases (A,B,C) and then representing the results in a single bar graph in order to compare the the results of the three cases together. The phenomenon to detect is constituted by four indicators ($I_1$, $I_2$, $I_3$, $I_4$) and in order to verify if and in which levels this phenomenon is present in each case, I have to calculate if and how much this indicators are present in every single case. In order to fulfill my aims (the verification of the presence of the phenomenon and the comparison) I thought to proceed as following:
1. calculating (separately for each indicator) in which percentage the $I_1, I_2, I_3, I_4$ are present in the cases A, B, C. For example, in the case A: $I_1=70%$, $I_2=20%$, $I_3=10%$, $I_4=20%$.
2. considering that the indicators have not the same weight in the determination of the phenomenon, I have used the operation of "pondering" by giving to each indicator the correct weight and then multiply the "weight" for all indicators' values of each case. For example, $I_1= 3$; $I_2=1$; $I_3=1$; $I_4=5$. So, in the case A: $I_1=70*3$; $I_2=20*1.....$
3. at this point I though that it seemed right to do the operation of "normalization" in order to make possible the comparison of all the resulting of the three cases in a single bar graph. The operation that I have do is the following: *$I_1(A)-X_m$(possible minimum value of $I_1A$)$/XM$(possible maximum value of $I_1$A)$-X_m$ . In this case $X_1=$ each value that each indicator takes in the single cases.
4. after having gathered all the resulting data normalized, I have have summed them for each case ($I_1A+I_2A+I_3A+I_4A$) and then convert them in a bar graph where the y-axis represent the levels in which the phenomenon is present in the case, and the x-axis represent every case.
I would to know if, by doing the normalization, I can make comparable all the three cases in a graph which act like an "index" of the presence of the phenomenon or if I made something wrong.
Blockquote
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8482521772384644, "perplexity": 327.4610339732499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148428.26/warc/CC-MAIN-20160205193908-00024-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://link.springer.com/article/10.1007%2Fs12155-008-9023-9
|
BioEnergy Research
, Volume 1, Issue 3, pp 229–238
Assessment of Canopy Structure, Light Interception, and Light-use Efficiency of First Year Regrowth of Shrub Willow (Salix sp.)
• Timothy A. Volk
• Christopher A. Nowak
• Godfrey J. Ofezu
Article
DOI: 10.1007/s12155-008-9023-9
Tharakan, P.J., Volk, T.A., Nowak, C.A. et al. Bioenerg. Res. (2008) 1: 229. doi:10.1007/s12155-008-9023-9
Abstract
According to the light-use efficiency model, differential biomass production among willow varieties may be attributed either to differences in the amount of light intercepted, the efficiency with which the intercepted light is converted to aboveground biomass, or both. In this study, variation in aboveground biomass production (AGBP) was analyzed in relation to fraction of incoming radiation intercepted (IPARF) and light-use efficiency (LUE) for five willow varieties. The plants were grown in a short-rotation woody crop (SRWC) system and were in their first year of regrowth on a 5 year old root system. The study was conducted during a two-month period (June 15th–August 15th, 2001) when growing conditions were deemed most favorable. The objectives were: (1) to assess the relative importance of IPARF in explaining variation in AGBP, and (2) to identify the key drivers of variation in LUE from a suite of measured leaf and canopy-level traits. Aboveground biomass production varied nearly three-fold among genotypes (3.55–10.02 Mg ha−1), while LUE spanned a two-fold range (1.21–2.52 g MJ−1). At peak leaf area index (LAI), IPARF ranged from 66%–92%. Nonetheless, both IPARF and LUE contributed to AGBP. An additive model combining photosynthesis on leaf area basis (Aarea), leaf mass per unit area (LMA), and light extinction coefficient (k) produced the most compelling predictors of LUE. In a post-coppice willow crop, the ability to maximize IPARF and LUE early in the growing season is advantageous for maximizing biomass production.
Keywords
Aboveground biomass Canopy structure Light interception Light-use efficiency Willow (Salix sp.)
Abbreviation
Aarea
Light-saturated photosynthesis per unit leaf area μmol m−2 s−1
AGBP
Aboveground biomass production Mg ha−1
Amass
Light-saturated photosynthesis per unit leaf mass nmol g−1 s−1
Ic
IPARF
Fraction of incoming photosynthetically active radiation intercepted
IPART
Total incoming photosynthetically active radiation intercepted MJ m−2
IRGA
Infrared gas analyzer
k
Light extinction coefficient for Beer’s law
LAI
Leaf area index
LMA
Leaf mass per unit area g m−2
LUE
Light-use efficiency g MJ−1
Narea
Leaf nitrogen concentration expressed on a per unit area g m−2
Nmass
Leaf nitrogen concentration expressed on a per unit mass g kg−1
NOAA
PAR
Photosynthetically active radiation μmol m−2 s−1
PARA
Incident photosynthetically active radiation on top of the canopy μmol m−2 s−1
PARB
Photosynthetically active radiation below the canopy μmol m−2 s−1
PFD
Photon flux density μE m−2 s−1
SRWC
Short-rotation woody crop
SUNY-ESF
State University of New York, College of Environmental Science and Forestry
Introduction
The existence of a strong relationship between light interception and crop growth is well established in agricultural crops [14, 39], short-rotation woody crops (SRWC) [5] and forest trees [27, 29]. This relationship is commonly analyzed using the light-use efficiency model [4, 30], which comprises two conceptual components: fraction of incoming photosynthetically active radiation (PAR) that is intercepted (IPARF) and the efficiency with which the intercepted radiation is converted into biomass, called light-use efficiency (LUE (ɛ); grams of biomass produced per megajoule of intercepted photosynthetically active radiation). Enhanced productivity of SRWC have been linked to early canopy development leading to high levels of light interception, and high photosynthetic capacity resulting in high carbon gain rates [6]. Previous studies have reported extensive variation in biomass production potential among willow varieties in Europe [21, 26] and North America [18]. From the light-use efficiency model, it can be theorized that the variation in AGBP among the different varieties may be related to differences in IPARF, LUE, or both.
There is some debate regarding the importance of LUE as a determinant of genotypic variation in biomass production. For example, Cannell et al. [5] found similar LUEs in a 1-year old poplar genotype and a 1-year old willow genotype, grown in containers under favorable conditions that varied nearly two-fold in biomass production. [19] and [1] have reported similar conclusions that under non-limiting growing conditions, LUE may be constant for a single species. When LUE refers to total dry mass production, it is likely to be a conservative value for C3 crops growing under similar temperature and radiation environments, especially on longer time scales. [1, 5, 19] and [5] have further reported that large improvements in biomass production might be realized mostly by focusing on differences in light interception (leaf area duration) and amount of biomass allocated to aboveground tissue, rather than on aspects related to LUE.
Others contend that the rates of canopy photosynthesis and possibly LUE can be varied by manipulating canopy structure [9, 37]. A comparison of two native and three-hybrid poplar genotypes in a high-density plantation in their third year of growth showed that LUE varied nearly two-fold [11]. Biomass production was closely related to LUE and was unrelated to IPARF, which only varied by 5% among the genotypes. Studies conducted on conifers have shown that inter-species and intra-species differences in biomass production were linked to variation in both IPARF and LUE [27, 28]. While studies of light-use efficiency in shrub willows are scarce, Bullard et al. [3] outlined some effects of varying planting densities on light-use efficiency in willows for three growing seasons. Under optimal conditions, change in planting density from 10,000–111,000 plants ha−1 had a significant effect on LUE of Salix viminalis (1.55–2.55 g MJ−1) and Salix × dasyclados (1.34–1.84 g MJ−1) respectively.
Thus, overall analyses of light-use dynamics in SRWC, and in willows in particular, are far from conclusive. We examined the relative importance of IPARF and LUE in explaining variability in biomass production, and identified the key drivers of variation in IPARF and LUE from the measured leaf- and canopy-level attributes. Analysis of this kind will provide information for selection of useful varieties and design of management practices that will facilitate high biomass production in competitive environments [20].
Intercepted radiation is regulated by the amount and orientation of foliage, and the duration during which the foliage is deployed [4, 19]. Given the variation reported in leaf area index (LAI) and leaf demography in willow [6], it can be hypothesized that significant variation in IPARF can be found among willow varieties. Variation in LUE between genotypes growing in a favorable environment, on the other hand, may be attributed to variation in leaf photosynthetic capacity, which is related to a combination of leaf structural and biochemical properties [4, 29, 35]. For example, differences in leaf nitrogen and leaf mass per unit area (LMA) are known to influence variation in leaf photosynthetic capacity within a single species and among different species [13, 35]. Both of these traits have been shown to vary among willow varieties [33]. Thus, efforts to understand LUE should include an assessment of attributes related to photosynthesis and foliage morphology and biochemistry. Canopy structure is a major feature that influences light distribution over the foliage surface within the canopy [6, 9, 34]. An “ideal” canopy would “optimize” light environment by distributing radiation throughout the canopy in such a way that all leaves were exposed to intermediate, nearly saturating quantum flux densities. The optimization of light environment is aided by corresponding variability in canopy structure. Light extinction coefficient (k), a measure of light attenuation within canopies, decreases as leaf angles increase. Low k in mid-and-upper canopy regions result in gradual light attenuation and deeper penetration of light, especially at the high leaf area indices observed in high-density SRWC plantations [9, 12]. This would maximize the rate of photosynthesis per unit of light intercepted by each leaf. Varieties can also differ in leaf size and shape and their arrangement on the branches, and stem and branch architecture- measured in terms of canopy width, number of stems, and branching patterns [42]. These morphological differences affect the energy and gas exchange processes and the development of large leaf areas.
The present study was conducted on five willow varieties known to have different aboveground biomass production and stem and foliar morphological characteristics. The objectives of this study were two-tiered: (1) to assess the relative importance of IPARF and LUE in explaining variation in aboveground woody biomass production (AGBP), and (2) to determine which of the select leaf and canopy-level constitutive traits were the key drivers of variation in IPARF and LUE. We hypothesized that variation in AGBP would be more closely related to IPARF than to LUE in this system with a rapidly developing canopy and that any LUE variation we did observe would be closely related to traits that maximize canopy photosynthesis, including those that govern intra-canopy light distribution.
Materials and Methods
Study Design and Site Conditions
The study was conducted at SUNY-ESF’s existing genetic selection trial [41] established at Tully, New York (42 47′ 30″ N, 76 07′ 30W). The soil was a well-drained Palmyra gravelly silt loam (Glossoboric Hapludalf) [17]. The trial, which included 32 shrub willow varieties and eight hybrid poplar genotypes, was established in late April 1997 on approximately 0.4 ha, as a randomized complete block design with four blocks. Individual plots were planted with 48 willow cuttings of 0.6 m by 0.9 m spacing. Cuttings were hand planted with 25 cm long dormant unrooted cuttings, flush with the soil surface. The plants were coppiced in the winter of 1997, harvested in 2000 at the end of the first rotation, and began their second rotation in the spring of 2001. Details of the site preparation, plot establishment and maintenance are presented in Tharakan et al. [41]. In late May 2001, the trial was fertilized with sulfur-coated urea at 120 kg N ha−1.
The study focused on measurements taken during the 2001 growing season on five shrub willow varieties (Table 1). These varieties had diverse morphological and growth attributes in the first rotation [41], and hence it was hypothesized that they would differ in aspects related to radiation capture and use. The study was conducted over the period when growing conditions were deemed to be most optimal (June 15th –August 15th). This time period was chosen to avoid the difficulty of separating variation in growth rate resulting from differences in growth duration related to differences in phenology (i.e. time of leaf flush and abscission) [11]. During this period, precipitation at the site (240 mm) was comparable to the 30 year mean, while growing degree-days (base 10°C) exceeded the 30 year average by 15% [31]. The study was conducted under near-optimal growing conditions to minimize genotype by environment interactions in biomass productivity and its determinants [4, 7].
Table 1
List of willow varieties used in the light-use efficiency (LUE) analysis
Variety
Parentage
Origina
94012
Salix purpurea
NY, USA
Pur12
Salix purpurea
S566
Salix eriocephala (erio) 28×Salix eriocephala 24
SV1
SX61
Salix sachalinensis
Japan
aDenotes the place where the collections or crosses were made rather than the geographical or botanical origin. For example, S. purpurea was imported from Europe in colonial times and has since been naturalized to Ontario, Canada and the Northeastern U.S.A
Aboveground Biomass Production (AGBP)
Allometric relationships relating stem diameter to stem dry weight, and stem diameter to foliage mass, were developed to estimate the net biomass gain of aboveground tissues. In early August, 30 stems representing the entire diameter range of each variety were selected from across the four replications and their diameters were recorded at a height of 5 cm from the ground prior to harvesting them. Post-harvest stem and foliage were separated and bagged. They were then dried at 65 °C to a constant weight. Variety-specific relations between stem biomass and diameter (y = axib+ei; r2 = 0.94–0.99) were subsequently used to estimate biomass gain for the center four stems in each plot. Similarly variety-specific relations between foliar biomass and diameter (y =axib+ei; r2 = 0.86–0.96) were used to estimate foliage biomass for the center four stems in each plot. Finally, measurements were averaged across the four individuals in each plot and converted to an area basis and summed to estimate AGBP (g m−2).
Canopy Structure, Light Interception, and Light Use-Efficiency
Leaf area index (LAI; projected leaf area and branches per unit ground area) was measured on a weekly basis during the study period using a LAI 2000 plant canopy analyzer (LI-COR Inc. Lincoln, NE, USA) [8]. A measurement cycle consisted of a reference measurement in a clearing, away from the canopy. This was followed by eight below-canopy readings and another reference measurement. The fish-eye lens of the instrument was covered with a view cap with a 45° opening to ensure that the measurements were not influenced by the surrounding plots or by the operator (LI-COR Inc. Lincoln, Nebraska). All measurements were taken at 0.5 m aboveground, either early in the morning or late afternoon to allow for totally diffuse light conditions, on cloudless or uniformly overcast days. To estimate canopy averages for leaf mass per unit area (LMA, g m−2), 15 undamaged leaves were selected and harvested from throughout the canopy in each plot in early August. Following leaf area estimation using a LI-COR 3100 leaf area machine [25], the harvested leaves were dried to constant weight at 65 °C. Leaf mass per unit area (LMA) was calculated as leaf dry mass/area. The leaf samples were then ground and total nitrogen (foliar N) was estimated by acid-base volumetry after Kjeldahl mineralization [2]. Nitrogen concentrations were calculated both on mass (Nmass, g kg−1) and area basis (Narea, g m−2).
In mid-July, canopy averages for leaf angle were estimated in all the plots based on measurements taken at increments of 1 m from the base to the top of the canopies using a protractor inclinometer [32]. Measurements for S566 were only taken at two canopy positions because the canopy depth was limited. The midrib angle in relation to the horizontal was measured on 10 leaves per increment [9]. Leaf angles were then averaged across height increments in each plot. Subsequently, canopy widths were measured in each increment by taking two measurements in the N-S and E-W direction with a meter stick and averaging them. Measurements were then averaged across height increments in each plot.
The total amount of photosynthetically active radiation intercepted by the canopy during the study period (IPART, MJ m−2) was estimated in each stand by using the equation:
$${\text{IPAR}}_{\text{T}} = \,{\text{Total}}\,{\text{PAR x}}\,{\text{IPAR}}_{\text{F}}$$
(1)
where, IPARF is the fraction of incident PAR intercepted by the canopy. Total PAR for the study period was measured on site (approximately 75 m from the trial) using a LI-190SA quantum sensor [23] that was attached to a data logger. Data was logged at 1 min intervals and averaged every 10 min. Fraction of incoming photosynthetic active radiation (IPARF) was calculated as:
$${\text{IPAR}}_{\text{F}} \, = \,1 - \left( {{{{\text{PAR}}_{\text{B}} } \mathord{\left/{\vphantom {{{\text{PAR}}_{\text{B}} } {{\text{PAR}}_{\text{A}} }}} \right.\kern-\nulldelimiterspace} {{\text{PAR}}_{\text{A}} }}} \right)$$
(2)
where, PARB was the PAR measured below the canopy and PARA was the incident PAR on top of the canopy. Both measurements were taken simultaneously at weekly intervals between 10.00–12.00 h on uniformly sunny or cloudy days. Photosynthetic active radiation above the canopy (PARA) was measured using a LI-190SA quantum sensor [23] that was positioned in a clearing away from all canopy interference. Photosynthetic active radiation below the canopy (PARB) was measured using a LI-191SA line quantum sensor that was calibrated to the LI-190SA quantum sensor. Measurements were taken at three points in each plot along a diagonal transect. At each sampling point, two measurements were taken by orienting the sensor first in the N–S direction and then along the E–W direction. All measurements were taken at approximately 10 cm from the ground. Weekly measurements of PARA and PARB were then used to calculate weekly values for IPARF and IPART. Weekly values of IPART were summed to yield IPART for the study period. The LUE (g MJ−1) of each plot was calculated by taking the quotient of AGBP and IPART for the study period. The canopy light extinction coefficient (k) for Beer’s law was estimated using the relationship between IPARF and LAI [30].
$${\text{IPAR}}_{\text{F}} \, = \,1 - e\,^{ - k\,x\,{\text{LAI}}}$$
(3)
where, k was determined from the regression of log (1- IPARF) against LAI. Canopy width and k were used as basis for comparisons of intracanopy light distribution.
Photosynthesis Measurements
Net photosynthesis was measured once in late July using a LI-COR 6200 photosynthetic system [24] equipped with a 0.25 L chamber. Readings were corrected for leaf area whenever the chamber was not completely filled. The infrared gas analyzer (IRGA) was calibrated daily and checked periodically throughout the day. All observations were made on days with bright, uniform conditions, between 9.30–13.00 h, to eliminate the possibility of any late-afternoon decline of photosynthetic capacity. Measurements were taken on two healthy, mature leaves, selected from the top third of the canopy of two plants per plot [11]. Leaf orientation was maintained while being enclosed in the chamber and all observations were made when the photon flux density (PFD) was above 800 μE m−2 s−1, which is saturating for many willow varieties [38]. We restricted measurements to ambient CO2 concentrations (330–360 μl l−1) and humidity levels (35–60%). Leaves were enclosed in the chamber for less than 90 s to prevent excessive rise in leaf temperature [24]. After the light-saturated photosynthetic rate (Aarea, μmol m−2 s−1) was measured, each leaf was collected, sealed in a bag and kept out of the sun until it was brought back to the lab to measure leaf area using a LI-COR 3100 leaf area meter [25]. The leaves were then dried to constant mass at 65 °C and weighed to determine LMA, which was used to calculate light-saturated photosynthesis per unit leaf mass (Amass, nmol g−1 s−1).
Statistical Analyses
Univariate analysis of variance (ANOVA), conducted using PROC GLM [36], was used to assess differences in AGBP, peak IPARF, LUE and all leaf and canopy traits among clones. Tukey’s mean studentized range test was used to determine significant mean separations among varieties at a critical level of ∝ = 0.05. Relationships among AGBP, peak IPARF, LUE and canopy traits were assessed by regression analysis conducted using Statistica version 6 [40]. Scatter plots of the variables and residual plots were used to identify suitable models. Except relationships between AGBP and IPARF, and between LAI and IPARF, which were curvilinear and hence described using a polynomial regression model, all other relationships were characterized using simple linear regression. The plots were the experimental units in all regressions. Step-wise multivariate regression was used to examine the relationships between LUE and all combinations of measured canopy traits.
Results
Biomass Production, Light-use Efficiency, and Leaf and Canopy Traits
Estimates of accumulated AGBP during the two months varied nearly three-fold (3.55–10.02 Mg ha−1; Table 2). Among the varieties, Pur12 and SX61 had the highest AGBP while 94012 and S566 were grouped at the lower end (Table 2; Fig. 1). Salix dasyclados (SV1) had intermediate AGBP. Light-use efficiency during the same period spanned a two-fold range (1.21–2.52 g MJ−1; Table 2). Again, Pur12 and SX61 had the highest values. As expected in a rapidly developing canopy, LAI varied throughout the study period with SX61, SV1 and Pur12 having the highest LAI in mid August (Fig. 2). Peak LAI in mid-August ranged from 2.74–3.74 (Table 2). The proportion of radiation intercepted also varied from mid-June to mid-August (Fig. 2). At peak LAI, IPARF ranged from 66.9–92.4% (Table 2).
Table 2
Mean net gain in aboveground biomass production (AGBP), light-use efficiency (LUE), peak light interception (IPARF), and select foliage and canopy characteristics
Variety
SX61
Pur12
SV1
94012
S566
Variablea
AGBP (Mg ha−1)b
10.02a
9.74a
7.74b
4.85c
3.55c
LUE (g MJ−1)
2.40a
2.52a
1.70b
1.43c
1.21c
LAI
3.73a
3.55a
3.73a
2.85b
2.74b
IPARF (%)
91.80a
91.20a
92.40a
76.50b
66.90b
LMA (g m−2)
80.60a
81.70a
69.10c
72.80bc
75.40b
Leaf angle (°)
−28.00
25.00
14.00
40.00
38.00
k
0.89
0.88
0.95
0.68
0.78
Crown width (cm)
114.38
109.13
113.69
64.17
87.82
Amass (nmol g−1s−1)
131.90c
197.20a
178.80b
139.50c
111.10d
Aarea (μmol m−2 s−1)
11.40b
14.90a
11.90b
11.10b
8.30c
Nmass (g kg−1)
24.70b
32.30a
25.80b
23.50b
20.90b
Narea (g m−2)
1.92
2.58
1.86
2.06
1.65
aAbbreviations are: IPARF-fraction of incoming photosynthetically active radiation (PAR) intercepted by the canopy at peak LAI; LAI-projected leaf area index on per unit area. The values presented here are peak values measured during the study; LMA-canopy average for leaf mass per unit area; Nmass-leaf nitrogen concentration expressed on a per unit mass; Leaf angle-angle of the leaf mid-rib from the horizontal; k-light extinction coefficient; Aarea-and Amass-light-saturated photosynthesis per unit leaf area and leaf mass in the upper canopy respectively
bMean values for a given variable followed by the same letter do not differ significantly at α = 0.05, according to Tukey’s studentized range test
Canopy averages for LMA were greatest in Pur12 (81.7 g m−2) and lowest in SV1 (69.1 g m−2) (Table 2). The differences in other canopy structural traits were also evident. Average leaf angle generally increased with canopy height (Fig. 3). Average leaf angle ranged from nearly planophile in SV1 (14°) and SX61 (−28°, drooping leaves), to more plagiophile in Pur12 (25°), S566 (38°) and 94012 (40°). Correspondingly, k varied from 0.68–0.95 (Table 2). Crown width also varied considerably, with 94012 and SX61 having the smallest value (64.17 cm), and the largest value (114.38 cm), respectively. Light-saturated photosynthesis both on leaf mass (Amass) and leaf area (Aarea) basis, showed similar patterns with highest value in Pur12 and lowest value in S566 (Table 2). Light-saturated photosynthesis ranged from 111.1–197.2 nmol g−1 s−1 by leaf mass and 8.3–14.9 μmol m−2 s−1 by leaf area. Canopy Nmass (20.90–32.30 g kg−1) and Narea (1.65–2.58 g m−2) also varied significantly among the varieties.
Determinants of AGBP, IPARF, and LUE
Across genotypes, AGBP was strongly and positively related to peak IPARF and LUE (Table 3; Fig. 1). Aboveground biomass production showed a quadratic relationship with IPARF, which in turn was positively related to LAI, crown width, and k. At >90% light interception, three varieties (SX61, Pur12, and SV1) exhibited increased aboveground biomass (>7 Mg ha−1). While the quadratic relationship with LAI and the linear relationship with k were strong, the linear relationship with crown width was weak, albeit significant. Regressing IPARF with combinations of the variables LAI, k, and crown width did not significantly enhance the model fit.
Table 3
Results of regression analyses between aboveground biomass production (AGBP), light interception (IPARF), light-use efficiency (LUE), and measured foliage and canopy traits (n = 20)
Variable ya
Variable x
r2
P
A
B
cb
AGBP
IPARF
0.80
<0.01
3199.50
−88.60
0.69
LUE
0.85
<0.01
−139.60
462.90
Ns
IPARF
k
0.75
<0.01
34.30
0.87
Ns
Crown width
0.41
<0.01
52.00
0.33
Ns
LAI
0.87
<0.01
−191.90
148.20
−19.20
LUE
k
0.28
0.02
−0.62
2.96
Ns
Aarea
0.62
<0.01
−0.43
0.20
Ns
Amass
0.31
0.01
0.41
0.01
Ns
LMA
0.52
<0.01
−5.92
0.06
Ns
Nmass
0.35
<0.01
−0.04
0.75
Ns
Crown width
0.41
<0.01
0.36
0.02
Ns
aVariable definitions are the same as in the Table 2
bThe relationship between the variables were either linear (y=a+bx) or quadratic (y=a+bx+cx2)
Light use efficiency was positively related to photosynthesis expressed on both leaf area (Aarea) and leaf mass (Amass) basis, with the relationship with Aarea being the stronger of the two (Table 3). Light-use efficiency was also positively related to canopy averages of foliar nitrogen concentrations expressed on mass-basis. In terms of canopy characteristics, LUE was positively related to k, canopy averages of LMA, Aarea, Amass, and crown width. Variable selection indicated that LUE was most strongly related to the additive combination of Aarea, Amass, and LMA, which explained 86% of the variation (p < 0.01). All the variables showed a positive influence and the contribution of each of the variables was highly significant (p < 0.05; LUE=−4.286 + 0.316(Aarea) + 0.036(LMA)−0.013(Amass).
Discussion
We assessed the effects of IPARF and LUE on the AGBP and several constitutive leaf- and canopy-level attributes. The findings reveal that IPARF is a strong determinant of AGBP in willow varieties in the first year of growth after harvest when the canopy is developing rapidly. High LUE was associated with high leaf photosynthetic potential (high Aarea and LMA), rather than with aspects related to intra-canopy light distribution (canopy width) and maximum light interception capacity (k).
Varietal Differences
The mean values for LUE measured in this study (1.21–2.52 g MJ−1) were similar to some published values obtained for willow. For instance Cannell et al. [5] obtained LUE value of 1.58 g MJ−1 for Salix sp. in their first year growth. Similarly LUE values for Salix viminalis and Salix × dasyclados ranged from 1.55–2.55 g MJ−1 and 1.34–1.84 g MJ−1 respectively [3]. The large variability in LUE observed mirrors the large variation reported for Populus spp [11] and for willow [3]. Cannell et al. [5] have suggested that significant differences in LUE and hence biomass allocation can be attributed to differences in carbon allocation. Since belowground biomass accumulation was not assessed in this study, it is uncertain to what extent the variation in LUE was attributed to differences in carbon allocation between aboveground and belowground components. Given the non-stressed growing conditions (i.e., fertilization, average precipitation, and near optimal temperatures), it can be presumed that the genotypes approached their full productive potential and adopted favorable allocation patterns [11, 20]. Foliar nitrogen concentration (Nmass) measured in this study (20.90–32.30 mg g−1) was similar to the normal to optimal range reported in Kopinga and van der Burg [16]. Thus, the observed behavior of these varieties may be typical for high-density plantings on productive sites, and the differences in LUE among these varieties may indicate inherent genetic variation arising from differential adaptation to such conditions.
Determinants of AGBP, IPARF, and LUE
The amount of radiation intercepted was most strongly related to the amount of foliage (LAI), but also was related to k, and crown width. In high-density willow plantings, the influence of canopy architecture on the mean fractional canopy interception is usually much less important than the LAI [6]. An additive model combining Aarea and LMA produced the best predictors of LUE, supporting the hypothesis that a suite of photosynthesis and light distribution traits would be closely related to LUE. Leaf mass per unit area relates to leaf thickness and density [10]. Leaves with high LMA often develop in high light and are associated with high mesophyll cell density. Studies have shown that sunlit canopy leaves tend to maximize photosynthetic capacity by combining high LMA with high Nmass to maximize Narea (Narea = Nmass * LMA), which is closely correlated with Aarea in leaves within plant canopies [10, 13]. Given the positive association between LUE and both Aarea and LMA in these willow varieties, it can be said that LUE was enhanced by intrinsically high photosynthetic rates combined with leaf structural traits that were adaptive under “high light” conditions.
The positive relation between k and LUE seen in this study is contrary to the negative relationship that has been observed in high-density plantations of other tree species and agricultural crops [11, 22, 29]. In dense canopies with high LAI, crops with erect leaves (high leaf angles and low k values) are considered to have a considerable yield advantage over those with horizontal leaves [9, 11]. Steep leaf angles in the uppermost canopy leaves followed by a gradual decrease in leaf angles along the vertical profile facilitates gradual light attenuation and better light distribution within the canopy, thereby maximizing canopy carbon gain [13]. However, orientation of foliage assumes less importance in sparse canopies that result from small-sized leaves or low LAI.
A coppiced canopy such as the one in this study can be considered growing in an open, “high light” condition with limited self-shading [11]. The potential for self-shading is further limited in a developing willow stand owing to small individual leaf size [41], the relatively low leaf area on each of the multiple stems per willow plant, and the relatively large crown width resulting from its multiple stem habits. Under such conditions, deeper penetration of light is achieved within the canopy even at the high k values seen in this study (0.5–0.7). In fact, high k at low levels of canopy competition maximizes light interception and thus the potential for energy harvesting [11, 13]. An analysis of the spatial variation in LMA provides some evidence of the “high light” environment prevailing in these willow canopies. In contrast to the spatial pattern of decreasing LMA from top to bottom in canopies characterized by strong light gradients [10] the varieties in this study showed very little variation in LMA through the depth of the canopy). The high positive association between LUE and LMA (Table 3), which is considered adaptive only under “high light” conditions further supports this explanation.
In a post-coppice context, the ability to rapidly maximize both IPARF and LUE in shrub willow is highly advantageous for maximizing biomass production. While this finding is contrary to the suggestions by Cannell et al. [5], it is in line with other studies, which reported similar variation in LUE among varieties and species [3, 11, 15]. However, it must be emphasized that the LAI were only moderately high, and severe competition had not developed. In a highly competitive environment, the relative importance of IPARF and LUE may shift in the direction of LUE as has been reported in older poplar plantations [11]. It should be noted that the gas exchange measurements in this study represent a point in time and may not be indicative of the entire growing season, which could vary due to seasonal variation in incident radiation. Nonetheless, the significant variation in photosynthetic rates seen among the clones is noteworthy. Even small differences in leaf-level photosynthesis of willow varieties when integrated over the entire canopy over time, can result in substantial differences in seasonal carbon gain [9].
This study corroborates the theory that at low levels of intra-canopy competition, higher light interception capacity, rather than intra-canopy light distributions are more important for maximizing LUE [22, 34]. Further research is needed to determine whether this trend is specific to willow due to its aforementioned canopy and foliar characteristics, or whether this is an artifact of the light environment in a young canopy.
Implications for Willow Breeding and Management
The variability in traits may provide useful information for breeding, selection and management. From our analysis, a large amount of photosynthetically active radiation was intercepted in mid-July at an optimal leaf area index for each variety (Fig. 2). Above this optimal point, further increase in leaf area index does not noticeably increase the energy intercepted. In addition, maximum radiation interception was significantly correlated with maximum biomass accumulation. Optimization of LAI would therefore increase light interception but further increase in LAI will negatively contribute to light interception. The analysis further demonstrates that breeding varieties with larger leaf surface area and leaf mass per unit area would maximize LUE. In these cases, the importance of maintaining large leaf area must be associated with an efficient utilization of the intercepted radiation, rather than the maximization of the interception of the radiation. High leaf area or light interception is not necessarily a good indicator of a plant’s ultimate potential following canopy closure. Traits that maximize the photosynthetic rates must balance losses from respiration.
While the results presented here highlight the adaptive value of a specific combination of traits, it must be borne in mind that alternative combinations of foliage amount, orientation, and aspects related to gas exchange may be adaptive to other environments and cultural conditions. Light-use efficiency is very sensitive to environmental variables that affect photosynthesis and balance with respiration (such as light, temperature, soil-water and humidity) [4]. In areas that have frequent moisture stress, high leaf area and high k during the growing season may be less advantageous since the condition enhances energy loading resulting into excess leaf temperature, water loss, respiratory loss, and photo-inhibition, all of which adversely affect LUE [19]. In such environments, the combination of high Nmass and LMA, and low k are more successful [9, 42]. In addition willow varieties with small leaves and numerous upright branches such as the S. purpurea have advantages in such environments. Smaller leaves are more efficient at regulating heat balance through convective cooling than larger leaves, which depend more on transpirative cooling [42].
Over the rotation period, as canopy density increases and variable stress environments are experienced, the ability to maximize LUE is a function of the extent of plasticity that might exist in the traits discussed above. Will et al. [43] reported that in Populus taeda and Populus elliottii plantations, LMA decreased with stocking density, and this modification in leaf morphology was the main mechanism by which trees in the different density treatments improved canopy light interception. The degree of plasticity exhibited will determine the optimal planting density range, the ultimate rotation length, and also the kind of sites on which a particular clone can be expected to perform well. For instance, if traits such as leaf angle and LMA are relatively fixed within a clone, then the associated optimal growing conditions, including density range, may be quite narrow for each clone. Conversely, if key traits are more plastic in some clones, then their planting density range, rotation length, and planting sites range would correspondingly be greater [11]. Information on trait plasticity in shrub willow is meager and inconclusive. Bullard et al. [3] reported a significant increase in LUE with increased planting density in two willow varieties. However, large sample variation precluded them from reporting any significant trend in corresponding k values.
Conclusion
Variation in biomass productivity among willow varieties in a post-coppice context was related to both the amount of light intercepted and the efficiency with which the intercepted light was converted to dry matter. Light-use efficiency, in turn, was most strongly related to “high light” interception capacity coupled with high intrinsic photosynthetic potential and foliar structure that was conducive to maximum carbon gain in “high light” conditions. The relative importance of these traits may vary under alternative environmental conditions, density, and competition regimes. Future studies with shrub willows should aim to examine genotype trait plasticity and the resulting LUE across diverse growing environments, stocking densities, and over a complete rotation.
Acknowledgements
The authors are grateful to the Biomass Feedstock Development Program of the US Department of Energy under contract DE–AC05–00OR22725 with the University of Tennessee-Battelle LLC (Subcontract number 19X–SW561C), USDA CSREES, and the New York State Energy and Research Development Authority (NYSERDA) for funding this study. Special appreciation is extended to C. Dattler and A. Millar for assistance in the field and the laboratory.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089454412460327, "perplexity": 6475.473684852503}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120206.98/warc/CC-MAIN-20170423031200-00191-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://tug.org/pipermail/texhax/2007-May/008425.html
|
# [texhax] Problem with column width
Samuel Lelievre samuel.lelievre.tex at free.fr
Mon May 21 22:13:20 CEST 2007
Kenneth Cabrera wrote:
> Hi TeX users:
>
> Why do I obtain a table with the last column wider than the rest?
> Thank you for your help.
>
> I am attaching the file code.
>
> Thank you for your help.
>
> --
> Kenneth Roy Cabrera Torres
> Cel 315 504 9339
Hi Kenneth,
but since there is a multicolumn on the last 4 columns, the last
of them is expanded to match the length needed for the entry in
multicolumn.
If you make the multicolumn entry shorter (eg by splitting it in
two lines), this problem will disappear. (cf. code below).
I don't know a way to have the extra space spread evenly among
the last four columns (but I'd be interested in knowing!).
Apart from that, you might be interested in the package booktabs
for professional-looking rules (horizontal lines) in tables.
The package longtable is also useful when your tables need to be
allowed to spread over a page break (I use it by default for any
table that stands on its own, i.e. not included in a paragraph).
Best,
Samuel
----- one way to avoid extra spacing in last column -----
\documentclass{article}
\usepackage[spanish]{babel}
\usepackage[latin1]{inputenc}
\usepackage{graphicx}
\usepackage{amsmath,amssymb,latexsym,amsthm}
\usepackage{fancyvrb}
\usepackage[margin=2cm]{geometry}
\setlength{\parindent}{0pt}
\setlength{\parskip}{1.5ex plus 0.5ex minus 0.2ex}
\begin{document}
\begin{table}[ht]
\begin{center}
\caption{Tabla X}\label{tb}
\vspace{2ex}
\begin{tabular}{ccccccc}
\hline
\multicolumn{3}{c}{} &
\multicolumn{4}{c}{N\'umero de viajes} \\
Tipo & Capacidad & N\'umero &
\multicolumn{4}{c}{diarios en la ruta} \\
\cline{4-7}
de avi\'on & (pasajeros) & de aviones & 1 & 2 & 3 & 4 \\
\hline
1 & 50 & 5 & 3 & 2 & 2 & 1 \\
2 & 30 & 8 & 4 & 3 & 3 & 2 \\
3 & 20 & 10 & 5 & 5 & 4 & 2 \\
\hline
\multicolumn{3}{l}{N\'umero de clientes diarios} &
1000 & 2000 & 900 & 1200 \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{document}
----- end of modified example file -----
Kenneth Cabrera wrote:
>Hi TeX users:
>
>Why do I obtain a table with the last column wider than the rest?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9102591276168823, "perplexity": 1633.681066226393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824733.32/warc/CC-MAIN-20171021095939-20171021115939-00131.warc.gz"}
|
https://www.physicsforums.com/threads/is-this-all-the-evidence-for-quarks.303164/
|
# Is this all the evidence for quarks?
1. Mar 28, 2009
### dangerbird
1-electron scattering
2-collider data
3- ? is there anything else which supports the quark model or is it just those 2. from what ive read so far its just those 2 but could be incorrect so correct me if my primative understandings off
THANKS
2. Mar 28, 2009
### malawi_glenn
Deep inelastisc scattering have used more probes than just electrons. One has used neutrinos and muons too for instance. (just deep inelastic scattering is enough to say that quarks exists)
Just saying "Collider data" is quite non precise - there are a lot of different data, which measures different things, and also many different collider experiment. It is like saying that the only evidence we have for Z bosons are collider data.
I would say the existence of the top quark as the strongest proofs of them all to call for the existence of quarks. The top quark does not hadronize, and the signal for top quarks is really interesting.
So you should be beware of that these proves are really good and are quite astonishing, and also some of these explorations have been awarded Nobel Prize.
Also when someone says "from what ive read", WHAT have you read? maybe there was a misunderstanding? etc.
3. Mar 28, 2009
### dangerbird
so basically anything that can enter the nucleus + can be measured when it deflects has been used then.... and the results of the paterns of deflections suggest theres 3 quarks per hadrun? that seems rather clever
4. Mar 28, 2009
### malawi_glenn
oh, no, there are more than 3 quarks in a proton, there are sea quarks as well. I just gave that answer to you in the thread you created. https://www.physicsforums.com/showthread.php?t=302385
I think you need to calm down, and get a good book on particle physics, and try to study it. You have again some misconceptions. First of all, there are hadrons which are made up on 2 valence quarks (one quark and one anti-quark).
5. Mar 28, 2009
### clem
The quark model was popular for five years before the first DIS experiments. This is because
3- A large number of the static properties (mass, spin, charge, muliplicity, magnetic moment)
of hadrons were correlated and predicted by assuming that baryons are composed of three quarks, and mesons of a quark-antiquark pair.
6. Mar 28, 2009
### humanino
Quite frankly to me the original question is not far from "what evidence do we have for quarks apart from physics ?". If you are clever enough to devise another experiment, please go ahead. But be aware that those two points you mention actually cover many different experiments. If only in DIS, historical measurements were done inclusively, by detecting only the scattered lepton. Nowadays we perform semi-inclusive measurements, where another particle at least is detected, and exclusive measurements, where all particles in the final state are detected. We find that all those phenomena are described by the same universal "wave functions" in terms of quark-gluon degrees of freedom.
As for collider data, again there are many different observations. You may collide two leptons or two hadrons for instance. If you collide two hadrons, you have theorems to deal with phenomena at high transverse momenta, or you have lepton pair productions (Drell-Yann). In fact it is very difficult to make an exhaustive list.
There is no definitive evidence that anything goes wrong with the partonic picture. On the contrary, the more we try to apply it to new situation, the more confident we become that our understanding is correct.
7. Mar 28, 2009
### Dmitry67
Hadronize???
Could you give more details or where to look?
8. Mar 28, 2009
### malawi_glenn
what? you don't know what hadronization is or what the top-quark signal looks like?
9. Mar 28, 2009
### Dmitry67
Yes, I am very stupid
So you claim that t quark do not form any bound systems with other quarks or what?
10. Mar 28, 2009
### malawi_glenn
no you are not stupid, I asked what you was asking for.
No, the top quark does not form any bound states, it is too short-lived, it decays before reaching out of the "perturbative scale".
11. Mar 28, 2009
### Dmitry67
I see... And what is so special about the signal?
BTW I heard that there is a Higgs-less model where t-anti t pairs play the role of Higgs, does it make sense?
12. Mar 28, 2009
### malawi_glenn
the signal is that you will have a b-quark etc which will give a invariant mass peak of about 170GeV for a fermion.
I have not heard of it, so I let someone else answer about that model.
13. Mar 28, 2009
### humanino
It depends which model exactly you are talking about. Gribov tried to do this, but he passed away before he could convince the community, and now his specific views seem out of fashion. But in any case, people still think around those kind of ideas and we should have more to test with LHC.
Electroweak symmetry breaking: to Higgs or not to Higgs
14. Mar 28, 2009
### Staff: Mentor
When talking about this time period, it's important to distinguish between quarks, the entities postulated by Gell-Mann and others, to account for the patterns of properties among hadrons, and partons, the hard point-like entities inside nucleons that were postulated by Feynman and others to account for the results of deep inelastic scattering experiments.
In the 1970s, it was by no means certain that quarks and partons were the same thing. One of the main goals of deep inelastic scattering experiments of the time (including the neutrino experiments that I worked on as a graduate student), was to test what was then called the "quark-parton model," which is now a basic part of the "standard model." One of the professors in my research group warned me not to get too attached to the quark-parton model, because it might turn out to be wrong.
15. Mar 29, 2009
### granpa
16. Mar 29, 2009
### dangerbird
alright, but now what im mainly wondering is how the DIS experiments support the quark model. i dont know how by shooting particle at the nucleus that it can differenciated that theres 3 quarks in a hadrun vs there being 9999. just by the paths of the deflections?
Last edited: Mar 29, 2009
17. Mar 29, 2009
### humanino
This very same question you asked just a few hours earlier on this forum. It might be more constructive if you do not ignore the answers you were already given.
It so happens that neutrinos respond differently to matter and anti-matter. So counting the difference between matter and antimatter inside a proton can be done by comparing the scattering of neutrinos and antineutrinos. It is by no means simple. But the observations agree with the theory.
18. Mar 29, 2009
### Vanadium 50
Staff Emeritus
A few comments:
As pointed out, there are a great many DIS experiments (including some at colliders - ZEUS and H1) and they clearly indicate the presence of three valance quarks - one type with charge +2/3 and the other with -1/3. It is, however, difficult to explain the details of how this can be extracted before someone understands (and by "understands", I mean "can calculate") the basics of DIS. Oh, and it's deeply inelastic scattering.
While baryon magnetic moments are often touted as a success of the quark model, it's not the best example. Any theory that has SU(3) flavor symmetry will make the same predictions as the quark model. So while it's evidence in favor of the quark model (and evidence against alternatives), it's not as compelling as it's usually advertised.
It's believed true that the top quark decays before it hadronizes, so one actually does observe a bare quark. However, there's no experimental evidence of this at the moment. One would need to study the angular correlations between polarized top quark pairs, and there just aren't enough of them out there to make a convincing measurement. We just have to wait.
I have no idea what granpa is talking about with the Delta. It's a hadron, to be sure, and it's therefore made of quarks, but it was not a particularly important stepping stone on the road to the quark model. The better example was the Omega-minus baryon, which was a state predicted by the quark model and (at the time) was undiscovered. Nick Samios and collaborators looked for it, and discovered it with exactly the predicted properties.
A powerful case for quarks is, in my mind, the energy levels of quarkonium - bound states of a heavy quark and an antiquark. These have energy levels similar to that of a hydrogen atom, and as such illustrate the dynamics of quark behavior. These measurements show that there are actual physical objects with the quark quantum numbers moving around inside the hadron.
19. Mar 29, 2009
### dangerbird
no i didnt
thats impossible neutrinos go through protons
20. Mar 29, 2009
### Vanadium 50
Staff Emeritus
So, JTBell, who worked on exactly these experiments (see above) is wrong? Or lying?
What makes you think that you know better than someone who actually did the experiment?
21. Mar 29, 2009
### dangerbird
Im just gona go out on a limb here, maybe neutrinos just naturally move arround like that and it has nothing to do with it bouncing off of some nucleus or proton? theres many possabilities
plus my IQ is 129
22. Mar 30, 2009
### Staff: Mentor
Going out on a limb is OK for professionals who are familiar with the field and who know what others have done before them. For others it's arrogance.
All theories and models are subject to being superseded by something better, but it happens only after solid experimental evidence or testable theoretical considerations, not some random musings about "many possibilities."
23. Mar 30, 2009
### Vanadium 50
Staff Emeritus
I agree with the sentiment, but might drop the word "professionals". The key is understanding what's gone before. If an amateur has taken the time and expended the effort to understand that, it's possible that their criticism is valid. Evidence is the key here, and I think the key is whether or not one can recognize it.
Of course, "you're wrong because I have some random musings about other possibilities - and a high IQ" is a non-starter.
24. Mar 30, 2009
### Staff: Mentor
Yes, I agree. I didn't intend to exclude serioius amateurs who have knowledge equivalent to a "real physicist."
Share this great discussion with others via Reddit, Google+, Twitter, or Facebook
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.843756377696991, "perplexity": 1284.3457025754442}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592861.86/warc/CC-MAIN-20180721223206-20180722003206-00475.warc.gz"}
|
https://crad.ict.ac.cn/CN/10.7544/issn1000-1239.2017.20151123
|
ISSN 1000-1239 CN 11-1777/TP
• 软件技术 •
### 云计算中资源延迟感知的实时任务调度方法
1. (国防科学技术大学信息系统工程重点实验室 长沙 410073) ([email protected])
• 出版日期: 2017-02-01
• 基金资助:
国家自然科学基金项目(61572511, 71271213);国防科学技术大学科研计划项目(ZK16-03-09)
### Resource-Delay-Aware Scheduling for Real-Time Tasks in Clouds
Chen Huangke, Zhu Jianghan, Zhu Xiaomin, Ma Manhao, Zhang Zhenshi
1. (Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073)
• Online: 2017-02-01
Abstract: Green cloud computing has become a central issue, and dynamical consolidation of virtual machines (VMs) and turning off the idle hosts show promising ways to reduce the energy consumption for cloud data centers. When the workload of the cloud platform increases rapidly, more hosts will be started on and more VMs will be deployed to provide more available resources. However, the time overheads of turning on hosts and starting VMs will delay the start time of tasks, which may violate the deadlines of real-time tasks. To address this issue, three novel startup-time-aware policies are developed to mitigate the impact of machine startup time on timing requirements of real-time tasks. Based on the startup-time-aware policies, we propose an algorithm called STARS to schedule real-time tasks and resources, such making a good trade-off between the schedulibility of real-time tasks and energy saving. Lastly, we conduct simulation experiments to compare STARS with two existing algorithms in the context of Google's workload trace, and the experimental results show that STARS outperforms those algorithms with respect to guarantee ratio, energy saving and resource utilization.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48608794808387756, "perplexity": 4134.178301021863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.59/warc/CC-MAIN-20220123202547-20220123232547-00145.warc.gz"}
|
http://hal.in2p3.fr/in2p3-00585107
|
# Searches for Cosmic-Ray Electron Anisotropies with the Fermi Large Area Telescope
Abstract : The Large Area Telescope on board the \textit{Fermi} satellite (\textit{Fermi}-LAT) detected more than 1.6 million cosmic-ray electrons/positrons with energies above 60 GeV during its first year of operation. The arrival directions of these events were searched for anisotropies of angular scale extending from $\sim$ 10 $^\circ$ up to 90$^\circ$, and of minimum energy extending from 60 GeV up to 480 GeV. Two independent techniques were used to search for anisotropies, both resulting in null results. Upper limits on the degree of the anisotropy were set that depended on the analyzed energy range and on the anisotropy's angular scale. The upper limits for a dipole anisotropy ranged from $\sim0.5%$ to $\sim10%$.
Document type :
Journal articles
Domain :
http://hal.in2p3.fr/in2p3-00585107
Contributor : Virginie Mas <>
Submitted on : Monday, April 11, 2011 - 5:08:54 PM
Last modification on : Saturday, April 11, 2020 - 2:06:13 AM
### Citation
Markus Ackermann, Marco Ajello, W. B. Atwood, Luca Baldini, Jean Ballet, et al.. Searches for Cosmic-Ray Electron Anisotropies with the Fermi Large Area Telescope. Physical Review D, American Physical Society, 2010, 82, pp.092003. ⟨10.1103/PhysRevD.82.092003⟩. ⟨in2p3-00585107⟩
Record views
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6681619882583618, "perplexity": 6100.553171963558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401578485.67/warc/CC-MAIN-20200927183616-20200927213616-00476.warc.gz"}
|
https://reperiendi.wordpress.com/2007/09/19/the-problem-of-evil-and-cartesian-categories/
|
# reperiendi
## Cartesian categories and the problem of evil
Posted in Category theory, Math by Mike Stay on 2007 September 19
How many one-element sets are there? Well, given any set $S,$ we can construct the one-element set $\{S\},$ so the collection of one-element sets has to be a proper class, a mindbogglingly enormous collection far larger than any mere set could be. However, they’re all the same from the point of view of functions coming out of them: the one element, no matter what it is, maps to a point in the range. The internal nature of a given one-element set is completely irrelevant to the way it behaves as a member of the category Set of all sets and functions.
For a category theorist, making a distinction between one-element sets is evil. Instead of looking inside an object to see how it’s made, we should only care about how it interacts with the world around it. There are certain kinds of objects that are naturally special because of the way they interact with everything else; we say they satisfy universal properties.
Just as it is evil to dwell on the differences between isomorphic one-element sets, it is evil to care about the inner workings of ordered pairs. Category theory elevates an ordered pair to a primitive concept by ignoring all details about the implementation of an ordered pair except how it interacts with the rest of the world. Ordered pairs are called “products” in category theory.
A product of the objects $G$ and $H$ in the category $C$ is
• an object of $C$, which we’ll label $G\times H,$ together with
• two maps, called projections
• $\pi_G:G\times H\to G$
• $\pi_H:G\times H\to H$
that satisfy the following universal property: for any triple $(X, f_G:X\to G, f_H:X \to H),$ there is a unique map from $X$ to $G\times H$ making the following diagram commute:
In particular, given two different representations of ordered pairs, there’s a unique way to map between them, so they must be isomorphic.
A category will either have products or it won’t:
——————
1. The category Set has the obvious cartesian product.
2. The trivial category has one object and one morphism, the identity, so there’s only one choice for a triple like the one in the definition:
$(X, 1_X, 1_X),$
and it’s clearly isomorphic to itself, so the trivial category has products.
3. A preorder is a set $S$ equipped with a $\le$ relation on pairs of elements of $S$ that is transitive and reflexive. Given any preorder, we can construct a category whose
• objects are the elements of $S$ and
• there is an arrow from $x$ to $y$ if $x \le y.$
So a product in a preorder is
• an element $z = x \times y$ of $S$ together with maps
• $\pi_x:z\to x$ (that is, $z \le x$)
• $\pi_y:z \to y$ (that is. $z \le y$)
such that for any other element $w \in S, \, w\le x, \, w \le y,$ we have $w \le z.$
In other words, a product $x \times y$ in a preorder is the greatest lower bound of $x, y$. For example, in the preorder $(\mathbb{R}, \le)$, the product of two numbers $x, y$ is min($x, y$). In the preorder $(\mbox{Set}, \subseteq)$, the product is $x \cap y$. In the preorder $(\mathbb{N}, |)$, where “|” is “divides”, the product is gcd($x, y$).
Exercise: in what preorder over $\mathbb{R}$ is the cartesian product the same as multiplication?
——————
A cartesian category is a category with products. (Note the lowercase ‘c:’ you know someone’s famous if things named after them aren’t capitalized. C.f. ‘abelian.’) You can think of the cartesian coordinate system for the plane with its ordered pair of coordinates to remind you that you can make ordered pairs of objects in a cartesian category.
### 6 Responses
1. Roshan said, on 2011 March 17 at 9:24 am
Nice writeup.
Btw you did mean z <= x in \pi_x instead of z < x (and similarly in \pi_y) didn't you?
2. a b said, on 2014 December 10 at 12:00 pm
Can I get a hint on the exercise?
• Mike Stay said, on 2014 December 10 at 12:37 pm
I don’t know if there *is* a preorder where the cartesian product is multiplication. In the category of finite sets and all functions between them, the cartesian product of two sets with cardinality m and n has cartinality mn. Groupoid cardinality (http://ncatlab.org/nlab/show/groupoid+cardinality) generalizes cardinality to a function from groupoids to nonnegative real numbers. We can generalize to other categories as described in http://arxiv.org/abs/math.CT/0212377 and get complex cardinalities; in all of these, the cardinality of the product is the product of the cardinalities. I think there may be a way to consider a real number to be a cardinality-equivalence class of such objects and get an arrow between two numbers if there’s a morphism between any two objects in the corresponding classes, but I haven’t proven it.
3. Joost Winter said, on 2015 January 28 at 1:53 pm
Let’s see… Assume there is such a preorder on R.
By reflexivity, we must have 4<=4 in the preorder. Because 2*2 equals 4, 4<2 must hold in the preorder. By a similar argument, 8<4 should hold in the preorder as 4*2 equals 8, however as we already know that 4<=2 and 4<=4 must hold, it follows that 4 is a lower bound of 2 and 4, and as 8<4 holds by one of our assumption, 8 cannot be the glb of 2 and 4. It thus follows that there cannot be a preorder on R (or on N, Q, Z for that matter) s.t. cartesian product/glb is multiplication.
4. Joost Winter said, on 2015 January 28 at 2:18 pm
A simpler counterexample is of course given by the fact that in any preorder with glbs, the glb of an element x with itself is always x… so the glb of 2 and 2 is always 2 (or some object isomorphic to 2) in any preorder with domain N/Z/Q/R…
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649474382400513, "perplexity": 385.8775541672958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317339.12/warc/CC-MAIN-20190822172901-20190822194901-00484.warc.gz"}
|
http://jphellemons.nl/?tag=/HTML&tag=/HTML
|
Configure shared mailbox Office 365 on Mac OS X mail.app
We have recently moved from hosted mail (pop3/imap) to Office365. This works perfectly with Outlook 2016 on Windows. The 2016 mac version of Outlook is also really good. However it just does not auto display the shared mailboxes. This can be easily solved.
just a simple, file –> open –> other users folder. and select the shared mailbox.
No outlook
It is a lot harder to add a shared mailbox to the default mail.app in Mac OS X. This manual configuration did not work. It suggests to use your main account and a slash and then the shared name or alias.
Here are the configuration details (which did not work for me)
IMAP:
server: outlook.office365.com
(if necessary: Port 993, SSL = ON and Password authentication)
SMTP:
server: smtp.office365.com
(if necessary: Port 587, SSL = ON and Password authentication)
I could not get it to work.
but this did work:
login as admin in office365 go to “users” and not the shared mailboxes.
There are users for shared mailboxes, but no one uses the user account. It is just internally needed for the mailboxes. Click the one that is corresponding with the shared mailbox and reset the password.
You can now use the credentials as above. Just use the pseudo user from the user list and the new password and as mail address the one from the shared mailbox
and outlook.office365.com as server. You can use the default mail.app in Mac OS X (10.10 or 10.11) It is a lot more work then with outlook, but if you are really fond of the default application, this is a solution to keep using it without moving to outlook.
Enjoy and good luck!
Pin on pinterest Plus on Googleplus Post on LinkedIn
Bulk add contacts to public folder shared contacts Office365/Outlook 2016
We have a lot of customer data in our system and I wanted to have a shared contact list with all the customer data for my co-workers.
I looking into some Office365 docs and found this walkthrough to create the folder: https://www.cogmotive.com/blog/office-365-tips/create-a-company-shared-contacts-folder-in-office-365
So afterwards you have an empty but shared contact folder. I thought that I needed the Microsoft Graph to access these contacts. Which would require my app to be in azure and it cannot have a command line only because of the authentication. More about that approach can be found here http://dev.office.com/getting-started/office365apis
But it seems that this is not needed for my simple one time import https://msdn.microsoft.com/en-us/office/office365/api/contacts-rest-operations
I thought that it might be a job for a PowerShell script and found this http://www.infinitconsulting.com/2015/01/bulk-import-contacts-to-office-365/
But there is an other option. Use Interop, because I as an Office365 user have Outlook 2016 on my Windows 10 machine and Visual Studio 2015. So create a new (console) application and add a reference to Outlook Interop:
I used this code to get to the “customers” address book:
using System;
using Outlook = Microsoft.Office.Interop.Outlook;
{
class Program
{
static void Main(string[] args)
{
var ap = new Outlook.Application();
foreach(Outlook.Folder f in ap.Session.Folders)
{
if (f.Name.ToLower().Contains("openbare")) // jp openbare mappen (public folders)
{
Console.WriteLine(f.FullFolderPath);
foreach (Outlook.Folder f2 in f.Folders)
{
if (f2.Name.ToLower().Contains("alle")) // alle openbare mappen (all public folders)
{
Console.WriteLine(f2.FullFolderPath);
foreach (Outlook.Folder f3 in f2.Folders)
{
if (f3.Name.ToLower().Contains("klanten")) // customers (folder name)
{
Console.WriteLine(f3.FullFolderPath);
foreach (Outlook.Folder f4 in f3.Folders)
{
{
Console.WriteLine(f4.FullFolderPath);
Console.WriteLine("----------");
/// display current items:
//Outlook.Items oItems = f4.Items;
//for (int i = 1; i <= oItems.Count; i++)
//{
// Outlook._ContactItem oContact = (Outlook._ContactItem)oItems[i];
// Console.WriteLine(oContact.FullName);
// oContact = null;
//}
// add test item:
Outlook.ContactItem newContact = (Outlook.ContactItem)ap.CreateItem(Outlook.OlItemType.olContactItem);
try
{
newContact.FirstName = "Jo";
newContact.LastName = "Berry";
newContact.CustomerID = "123456";
newContact.PrimaryTelephoneNumber = "(425)555-0111";
newContact.MailingAddressStreet = "123 Main St.";
newContact.Move(f4);
newContact.Save();
}
catch (Exception ex)
{
Console.WriteLine("The new contact was not saved. " + ex.Message);
}
}
}
}
}
}
}
}
}
}
}
}
This is not the prettiest code I have written. So there must be a better way. But this is just a one time application to loop through some database records and add contacts. So I will just leave it like this. Please let me know in the comments if you have suggestions to improve readability. The code snippet will give this contact in your (and the rest of the companies) outlook:
If you are looking how to import a folder full with vcf cards you should take a look at this MSDN article.
This should give you enough pointers to bulk add contacts to a shared folder in Office365 (Outlook 2016)
Good luck!
Pin on pinterest Plus on Googleplus Post on LinkedIn
Macbook air 13” 4mot/4/40000002: exhaust-0
Well the title is clear right? I ran the Apple Hardware Test (AHT) because our model was from 2012.
• Switched off the macbook
• power on macbook
• press and hold ‘D’
I ran the extended and more advance test which takes a bit longer. But it gave me the message:
4mot/4/40000002: exhaust-0
it said that the fan was broken. It was exactly what I expected due to the noise of the fan.
Our model was an A1466 and the fan replacement would cost me 145 euro at an Apple care center, so I bought the part online for 26 euro.
Here is the step by step ifixit guide. Please note that you need a P5 pentalobe screwdriver which is Apple specific. I bought it online for 5 euro.
I have spend 38 euro inc shippingcosts. You can get the screwdriver really cheap from aliexpress, but the shipping takes a long time and I cannot live that long without my macbook. Same for the fan it seems. You can get both for 9 dollar without shipping costs but it will take a while to arrive.
On the left is the new one, right side is the old one.
The hardest part was opening the zif connector to reattach the data kabel for the new fan.
The zif connector is so small that it is hard to open.
Running the hardware diagnostics tool can confirm that replacing the fan is a succes. You will not get the 4mot/4/40000002 exhaust error again.
Good luck with the fan replacement!
Pin on pinterest Plus on Googleplus Post on LinkedIn
Manage users for webform auth missing option in Visual Studio
I have blogged about this in November 2013 which is almost 2 years ago.
run as admin (windows key + X) command prompt as admin
C:\Program Files (x86)\IIS Express\iisexpress.exe" /path:C:\Windows\Microsoft.NET\Framework\v4.0.30319\ASP.NETWebAdminFiles /vpath:"/webadmin" /port:12345 /clr:4.0 /ntlm
and then there was this stacktrace and error… I have not had this issue before. It was since the 4.6 framework was installed.
Thankfully I found this stackoverflow answer which resolves it:
in notepad++ and ctrl+g (go to) line 989 and give the string appId the value of “1” instead of the stringutil.blabla
save and reload the page.
No more error on:
StringUtil.GetNonRandomizedHashCode(String.Concat(appPath, appPhysPath)).ToString("x", CultureInfo.InvariantCulture);
Good luck!
Pin on pinterest Plus on Googleplus Post on LinkedIn
Calculate km distance between two latitude longitude points from two floats
• Posted in:
• SQL
Having a store locator on my to do list, I started months back with storing the latitude and longitude for the stores in a database. I forked this repository in 2012 https://github.com/bjorn2404/jQuery-Store-Locator-Plugin in order to have something like a store locator on our website. The downside to that store locator is that it is all client side and puts a copy of all your customer data on the internet. It is great data for your competition. So to disclose as little as possible and still serve your valued customers, I decided to do the calculation on the backend. Closest to the data. So that would be on the SQL server.
SQL Server has datatypes like points and geometry since 2008. With this SQL statement you can get the distance between my location
lat 51.69917 lng 5.30417 and the locations in the stores table.
SELECT top 10 StoreID
,StoreName
,StoreLatitude
,StoreLongitude
,StoreCity
,round((geography::Point(StoreLatitude, StoreLongitude, 4326).STDistance(geography::Point(51.69917, 5.30417, 4326))) / 1000, 1) as km
FROM STORES
where StoreLongitude < 150 and StoreLongitude > -150 and StoreLatitude < 90 and StoreLatitude > -90
order by (geography::Point(StoreLatitude, StoreLongitude, 4326).STDistance(geography::Point(51.69917, 5.30417, 4326)))
The third parameter for the point data type is 4326 which is an enumeration:
And about 4326 SRID. It also called WSG84(http://en.wikipedia.org/wiki/World_Geodetic_System) system. It's the most common system to represent point on spherical(not flat) earth. It uses degree,minute,second notation and it's x and y coordinates are usually called latitude and longitude.
SRID (Spatial Reference System Identifier) : https://en.wikipedia.org/wiki/SRID
This post is more of a reference for me to lookup how to calculate distances on the Microsoft SQL server in SQL. But maybe it helps others too.
Good luck!
Pin on pinterest Plus on Googleplus Post on LinkedIn
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21831242740154266, "perplexity": 4151.613832920513}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823947.97/warc/CC-MAIN-20160723071023-00141-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://arxiver.wordpress.com/2016/09/08/a-new-correlation-with-lower-kilohertz-quasi-periodic-oscillation-frequency-in-the-ensemble-of-low-mass-x-ray-binaries-heap/
|
# A new correlation with lower kilohertz quasi-periodic oscillation frequency in the ensemble of low-mass X-ray binaries [HEAP]
We study the dependence of kHz quasi-periodic oscillation (QPO) frequency on accretion-related parameters in the ensemble of neutron star low-mass X-ray binaries. Based on the mass accretion rate, $\dot{M}$, and the magnetic field strength, $B$, on the surface of the neutron star, we find a correlation between the lower kHz QPO frequency and $\dot{M}/B^{2}$. The correlation holds in the current ensemble of Z and atoll sources and therefore can explain the lack of correlation between the kHz QPO frequency and X-ray luminosity in the same ensemble. The average run of lower kHz QPO frequencies throughout the correlation can be described by a power-law fit to source data. The simple power-law, however, cannot describe the frequency distribution in an individual source. The model function fit to frequency data, on the other hand, can account for the observed distribution of lower kHz QPO frequencies in the case of individual sources as well as the ensemble of sources. The model function depends on the basic length scales such as the magnetospheric radius and the radial width of the boundary region, both of which are expected to vary with $\dot{M}$ to determine the QPO frequencies. In addition to modifying the length scales and hence the QPO frequencies, the variation in $\dot{M}$, being sufficiently large, may also lead to distinct accretion regimes, which would be characterized by Z and atoll phases.
M. Erkut, S. Duran, O. Catmabacak, et. al.
Thu, 8 Sep 16
58/60
Comments: 11 pages, 5 figures, 3 tables, accepted for publication in The Astrophysical Journal
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9758796095848083, "perplexity": 1086.6643294413207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825916.52/warc/CC-MAIN-20181214140721-20181214162221-00045.warc.gz"}
|
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&pubname=all&v1=14E05&startRec=31
|
AMS eContent Search Results
Matches for: msc=(14E05) AND publication=(all) Sort order: Date Format: Standard display
Results: 31 to 60 of 64 found Go to page: 1 2 3
[31] Masayuki Kawakita. General elephants of three-fold divisorial contractions. J. Amer. Math. Soc. 16 (2003) 331-362. MR 1949163. Abstract, references, and article information View Article: PDF This article is available free of charge [32] Giuseppe Pareschi and Mihnea Popa. Regularity on abelian varieties I. J. Amer. Math. Soc. 16 (2003) 285-302. MR 1949161. Abstract, references, and article information View Article: PDF This article is available free of charge [33] Aron Simis and Rafael H. Villarreal. Constraints for the normality of monomial subrings and birationality. Proc. Amer. Math. Soc. 131 (2003) 2043-2048. MR 1963748. Abstract, references, and article information View Article: PDF This article is available free of charge [34] Shigeharu Takayama. Iitaka's fibrations via multiplier ideals. Trans. Amer. Math. Soc. 355 (2003) 37-47. MR 1928076. Abstract, references, and article information View Article: PDF This article is available free of charge [35] Dan Abramovich, Kalle Karu, Kenji Matsuki and Jaroslaw Wlodarczyk. Torification and factorization of birational maps. J. Amer. Math. Soc. 15 (2002) 531-572. MR 1896232. Abstract, references, and article information View Article: PDF This article is available free of charge [36] Jaime Gutierrez, Rosario Rubio and Jie-Tai Yu. $D$-resultant for rational functions. Proc. Amer. Math. Soc. 130 (2002) 2237-2246. MR 1896403. Abstract, references, and article information View Article: PDF This article is available free of charge [37] Yasuyuki Kachi and Eiichi Sato. Segre's reflexivity and an inductive characterization of hyperquadrics. Memoirs of the AMS 160 (2002) MR 1938329. Book volume table of contents [38] Yildiray Ozan. On homology of real algebraic varieties. Proc. Amer. Math. Soc. 129 (2001) 3167-3175. MR 1844989. Abstract, references, and article information View Article: PDF This article is available free of charge [39] Vitaly Vologodsky. On birational morphisms between pencils of Del Pezzo surfaces. Proc. Amer. Math. Soc. 129 (2001) 2227-2234. MR 1823904. Abstract, references, and article information View Article: PDF This article is available free of charge [40] Meng Chen. The relative pluricanonical stability for 3-folds of general type. Proc. Amer. Math. Soc. 129 (2001) 1927-1937. MR 1825899. Abstract, references, and article information View Article: PDF This article is available free of charge [41] János Kollár. Real algebraic threefolds II. Minimal model program. J. Amer. Math. Soc. 12 (1999) 33-83. MR 1639616. Abstract, references, and article information View Article: PDF This article is available free of charge [42] Steven Kleiman and Bernd Ulrich. Gorenstein algebras, symmetric matrices, self-linked ideals, and symbolic powers. Trans. Amer. Math. Soc. 349 (1997) 4973-5000. MR 1422609. Abstract, references, and article information View Article: PDF This article is available free of charge [43] Jaroslaw Wlodarczyk. Decomposition of Birational Toric Maps in Blow-Ups and Blow-Downs. Trans. Amer. Math. Soc. 349 (1997) 373-411. MR 1370654. Abstract, references, and article information View Article: PDF This article is available free of charge [44] Dong-Kwan Shin. On the pluricanonical map of threefolds of general type. Proc. Amer. Math. Soc. 124 (1996) 3641-3646. MR 1389536. Abstract, references, and article information View Article: PDF This article is available free of charge [45] János Kollár. Nonrational hypersurfaces . J. Amer. Math. Soc. 8 (1995) 241-249. MR 1273416. Abstract, references, and article information View Article: PDF This article is available free of charge [46] Peter Hall. Uniqueness theorems for parametrized algebraic curves . Trans. Amer. Math. Soc. 341 (1994) 829-840. MR 1144014. Abstract, references, and article information View Article: PDF This article is available free of charge [47] Arno van den Essen. Locally finite and locally nilpotent derivations with applications to polynomial flows, morphisms and $\mathcal{G}_a$-actions. II . Proc. Amer. Math. Soc. 121 (1994) 667-678. MR 1185282. Abstract, references, and article information View Article: PDF This article is available free of charge [48] János Kollár and Shigefumi Mori. Classification of three-dimensional flips . J. Amer. Math. Soc. 5 (1992) 533-703. MR 1149195. Abstract, references, and article information View Article: PDF This article is available free of charge [49] Caterina Cumino, Silvio Greco and Mirella Manaresi. Hyperplane sections of weakly normal varieties in positive characteristic . Proc. Amer. Math. Soc. 106 (1989) 37-42. MR 953739. Abstract, references, and article information View Article: PDF This article is available free of charge [50] Bernard Johnston. The uniform bound problem for local birational nonsingular morphisms . Trans. Amer. Math. Soc. 312 (1989) 421-431. MR 983873. Abstract, references, and article information View Article: PDF This article is available free of charge [51] Krzysztof Kurdyka and Kamil Rusek. Polynomial-rational bijections of ${\bf R}\sp n$ . Proc. Amer. Math. Soc. 102 (1988) 804-808. MR 934846. Abstract, references, and article information View Article: PDF This article is available free of charge [52] János Kollár. The structure of algebraic threefolds: an introduction to Mori's program. Bull. Amer. Math. Soc. 17 (1987) 211-273. MR 903730. Abstract, references, and article information View Article: PDF [53] Brian Harbourne. Rational surfaces with infinite automorphism group and no antipluricanonical curve . Proc. Amer. Math. Soc. 99 (1987) 409-414. MR 875372. Abstract, references, and article information View Article: PDF This article is available free of charge [54] Ciro Ciliberto. On a property of Castelnuovo varieties . Trans. Amer. Math. Soc. 303 (1987) 201-210. MR 896017. Abstract, references, and article information View Article: PDF This article is available free of charge [55] Hyman Bass, Edwin H. Connell and David Wright. The Jacobian conjecture: Reduction of degree and formal expansion of the inverse. Bull. Amer. Math. Soc. 7 (1982) 287-330. MR 663785. Abstract, references, and article information View Article: PDF [56] Shoshichi Kobayashi. Intrinsic distances, measures and geometric function theory. Bull. Amer. Math. Soc. 82 (1976) 357-416. MR 0414940. Abstract, references, and article information View Article: PDF [57] Morris Marden. On the zeros of linear partial fractions . Trans. Amer. Math. Soc. 32 (1930) 81-109. MR 1501527. Abstract, references, and article information View Article: PDF This article is available free of charge [58] J. F. Ritt. Equivalent rational substitutions . Trans. Amer. Math. Soc. 26 (1924) 221-229. MR 1501274. Abstract, references, and article information View Article: PDF This article is available free of charge [59] F. R. Sharpe and Virgil Snyder. The $(1,2)$ correspondence associated with the cubic space involution of order two . Trans. Amer. Math. Soc. 25 (1923) 1-12. MR 1501230. Abstract, references, and article information View Article: PDF This article is available free of charge [60] Gilbert Ames Bliss. Birational transformations simplifying singularities of algebraic curves . Trans. Amer. Math. Soc. 24 (1922) 274-285. MR 1501226. Abstract, references, and article information View Article: PDF This article is available free of charge
Results: 31 to 60 of 64 found Go to page: 1 2 3
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9405291676521301, "perplexity": 2093.4116768522817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805578.23/warc/CC-MAIN-20171119115102-20171119135102-00018.warc.gz"}
|
http://www.ck12.org/physical-science/Calculating-Acceleration-from-Force-and-Mass-in-Physical-Science/asmtpractice/Calculating-Acceleration-from-Force-and-Mass-in-Physical-Science-Practice/r1/
|
<meta http-equiv="refresh" content="1; url=/nojavascript/">
# Calculating Acceleration from Force and Mass
The acceleration of an object equals the net force acting on it divided by its mass.
%
Progress
Progress
%
Calculating Acceleration from Force and Mass Practice
SCI.PSC.214.31
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9049716591835022, "perplexity": 3171.6225148001017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036776.1/warc/CC-MAIN-20150601214356-00090-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://runescape.fandom.com/wiki/Wilderness_Warbands
|
## FANDOM
44,121 Pages
This is a dangerous Distraction and Diversion.If you die, you will lose your items and will need to reclaim them from Death or your grave.
This article has a strategy guide.All information on mechanics, setups, and strategy are on the subpage.
This activity may have associated player-run services, such as Friends Chat channels. Visit the official Minigames and D&Ds forum for more information.
Wilderness Warbands is a dangerous members Distraction & Diversion, composed of heavily defended storage camps guarded by the followers of the different Gods. These camps are founded to gain an advantage for the followers of a particular God once their God returns. They will set up camp in the Wilderness every 7 hours, one time at each of the three locations. Quercus, an ancient seer Ent, will send out adventurers to fight in the camps to balance the followers of the different Gods in Guthix's name. Quercus can be found just outside the Grand Exchange agility shortcut. Warbands events are synchronised on each world.
The activity was released on 25 March 2013 and was first mentioned in Jagex's special Behind the Scenes post.[1]
Wilderness Warbands give large amounts of experience in Farming, Construction, Herblore, Mining, or Smithing. While no combat is necessary and you do not need any items to loot, it is still highly contested between players which may result in the loss of most rewards.
This repeatable content has a hard reset. The next period's content will become available immediately, regardless of whether a player has logged out.
## Time for the next warbands
As a new camp is set up every 7 hours, the time at which these camps takes place is repeated every week. You can find the times (UTC/Game Time) a camp is set up given the day of the week:
Monday Tuesday Wednesday Thursday Friday Saturday Sunday
02:00 UTC 06:00 UTC 03:00 UTC 00:00 UTC 04:00 UTC 01:00 UTC 05:00 UTC
09:00 UTC 13:00 UTC 10:00 UTC 07:00 UTC 11:00 UTC 08:00 UTC 12:00 UTC
16:00 UTC 20:00 UTC 17:00 UTC 14:00 UTC 18:00 UTC 15:00 UTC 19:00 UTC
23:00 UTC N/A N/A 21:00 UTC N/A 22:00 UTC N/A
## Locations
Location Wilderness level Travel Map
South of the Dark Warriors' Fortress
(Call out is 'DWF')
10
South of the Red Dragon Isle
(Call out is 'RDI')
31
East of the Lava Maze
(Call out is 'ELM')
41
## Quick Guide
1. A warband has appeared when you see a message like "warbands sighted east of X!" in your chat log.
2. Bank all your items and equipment.
3. Teleport into the wilderness.
4. Go to the location mentioned in the message.
5. Click on the "beam" to get 10K prayer XP (if any exists by the time you get there). Click on the "beam" a second time to get 10K summoning XP.
6. Wait for the other players to kill all the guards, or help in the fight if you are equipped.
7. Click on the tent that you wish to get XP in.
8. Collect as many resources as possible, and fill your inventory to the limit.
9. Head back to the wilderness wall in a random direction in order to avoid PKers (teleports don't work).
10. Cross over the wall and head to Quercus (the Ent to the north-west of the GE).
11. Right-click Quercus and select Rewards, choose "Gain XP".
## Gameplay
Players will be notified via a server-wide notice when a Wilderness camp appears. To find a camp after missing the notice, you can talk to Quercus in Edgeville, near the Grand Exchange shortcut. Quercus will also inform you when the next camp is if there is not one currently occurring. A wilderness sword 4 may be used to teleport directly to a Warband camp once per day.
### Arriving at the camp
Players must then head to the camp and avoid any attention by the sentries posted around the camp. Most of the camp's followers always face towards the centre of the camp and are incapable of noticing the player: You must pay attention to the pair of followers sparring with each other, as one of them can spot you if you come too close to the camp's outer edge.
### Interrupting the beam
Players can then interrupt the god's magic beam in the middle of the camp by finding a spot outside the camp, but close enough to the beam in a direct line of sight. The draining/siphoning is similar to the Runespan and a progress bar will show when the interruption is complete.
### Summoning reinforcements
After successful interruption, players can then use the magic beam in the middle of the camp to summon reinforcement NPCs. Stay in a spot outside the camp, but close enough to the beam in a direct line of sight. The draining/siphoning is similar to the Runespan and a progress bar will show when the summoning is complete.
Succeeding in the sabotage will teleport in allied NPCs to attack the enemy warband, and grants five additional minutes to kill the warband and loot the tents before the camp collapses. Failing to stealthily sabotage the beam results in the whole camp turning hostile without any reinforcements to distract them from attacking players, and only five minutes to clear the camp and loot the tents.
### Clearing the camp
Players must kill all enemies, including the general, before looting. Players can intervene in the fight if they choose to, or let the NPC reinforcements clear the camp. Players should be careful when fighting the warbands, as they use abilities, and the general can execute powerful ultimate abilities, including a variant of Meteor Strike.
The warlords have extremely high defence if the minions are not cleared beforehand, so it is highly recommended to kill them before focusing on the warlord.
### Looting
Participating in looting will skull you, and holding the looting supplies in your inventory will prevent you from teleporting. Players can be attacked whilst looting without any combat level restrictions.
Even after the general dies, any remaining enemy followers will need to be killed before you can loot the tents. The warband camp has five tents, but only three out of the five tents will be filled with supplies, corresponding to a random mix of three of the five skills: Farming, Construction, Herblore, Mining, and Smithing. Players will want to collect as many supplies as possible in order to earn the most rewards.
You can directly loot a total of 25 supplies per camp from tents before you have to go to Quercus to turn them in. There are only 1,000 pieces of loot in the camp, meaning that up to 40 players can take maximum loot.
You can loot from up to 3 camps and obtain up to 75 supplies every day, resetting at midnight UTC. The 3 camps looted must be on different worlds. Even if you are killed and lose supplies, your attempt will still count as one of the three. However you can return to the same camp to try again without penalty. This generally means that you can loot a maximum of 75 items per day, but only if you manage to get the maximum of 25 from every camp. However, hopping worlds or relogging counts as another try for looting if the player has already made a previous attempt during the event.
Supplies can also be looted from other players by killing them and taking the dropped supplies. The 25 supply limit per camp does not apply to supplies picked up from player drops, but does apply to the daily 75 supply maximum. Once you've hit the daily limit, attempting to pick up more supplies from the ground by player killing will result in the message "You have already looted all you can for today." The same limits apply for supplies dropped due to your own death; that is, picking up the same 25 supplies due to being killed, after obtaining 75 for the day will be given the message.
While looting supplies from the tents, players may randomly receive a wand of treachery. Looting a Wand of treachery will not count towards the 25 supply limit. If a wand of treachery has been looted, the game will announce to everybody in the vicinity that a wand has been found, and the player who has looted it will glow for 20 seconds, as well as have their Prayer points reduced to zero. Only one wand can be found per camp, though no looter is guaranteed to get it. If the player holding the wand is killed, the wand will be dropped for others to acquire. The next subsequent player to acquire the wand will not glow or have their Prayer points reduced to zero.
Logging out or disconnecting while inside the Wilderness will immediately remove all warbands supplies. Additionally, looting, then hopping to loot once more counts as another instance of Warbands against the daily three.
Once supplies are looted, players are tele-blocked and must walk/run back to Quercus to exchange them for rewards.
### Leaving the Wilderness
The most common method of leaving the Wilderness is to simply run south from the Warband camps. As teleportation cannot be used, running to level 20 Wilderness and attempting to teleport out (or level 30 Wilderness with Amulet of Glory or Wilderness Sword) will not work. Additionally, the Wilderness Obelisks cannot be used. However, if running due south is not feasible, perhaps due to attacking players, there are alternate routes for leaving the Wilderness.
Remember that only leaving the Wilderness via the Wilderness wall can be done when in combat.
• Temporary safe zones: These are safe zones in the Wilderness and can be used to escape attackers, but are not practical for leaving the Wilderness as the only non-teleportation method out is to re-enter the Wilderness. However, Warband supplies are converted to the untradable variant within these zones. Thus, they can be used to hop worlds or to logout without losing Warband supplies.
• Corporeal Beast Lair: found in level 33 Wilderness, east of the Lava Maze camp, and west of the Red Dragon Isle camp.
• Death Plateau Beacon: found in level 13 Wilderness, west of the Dark Warriors Fortress camp. Prior to initial use, the ladder must first be repaired with 2 planks and 4 nails. This only has to be completed once. Players can only start the process of climbing the ladder when out of combat, but as the ladder is still in the Wilderness, and the climbing animation takes some time, players can still be attacked when climbing the ladder. Getting attacked after already climbing the ladder will not interrupt the process.
• The Pit, from the Wilderness Agility Course: The Talent Scout will teleport players out of the Wilderness when holding Warband supplies. However, this requires completing several laps of the course, using a D&D Token (daily) or requires wearing an Agility skillcape. This is not recommended as it requires 99 Agility and bringing a skillcape to be reliable, cannot be used in combat, and is next to a permanent exit from the Wilderness at Ghorrock.
• Alternate Wilderness exits: These allow the player to leave the Wilderness and return to Quercus without re-entering the Wilderness. These routes are only recommended when escaping attackers or when no more looting is desired, as they take more time than simply running south from the camps, so it is unlikely that multiple camps can be looted if using one of these routes.
• Daemonheim: The Wilderness entrance to Daemonheim is found in level 12 Wilderness on the easternmost side, so it is fairly close when running due south from the Red Dragon Isle camp. Supplies can be returned to Quercus without returning to the Wilderness by taking the boat to Taverley or Al Kharid, and running to Edgeville from Taverley, or taking the canoe from Lumbridge.
• Ghorrock: The Wilderness entrance to Ghorrock is found in the Ice Plateau in level 50 Wilderness, in the far northwest corner, after The Temple at Senntisten, and with a heat globe on the pedastal. Players can only start the process of squeezing past the ice block when out of combat, but as the ice block is still in the Wilderness, and the animation takes some time, players can still be attacked when squeezing past the ice block. Getting attacked after the animation has started will not interrupt the process. There are several ways to return to Quercus from Ghorrock without returning to the Wilderness:
• Chaos Tunnels: Any of the numerous rifts leading to the Chaos Tunnels can be used. Chaos Tunnels transporters are not blocked by holding Warband supplies. However, as all of the Chaos Tunnels entrances are in low-level Wilderness, it's generally easier to simply run south rather than navigate the Chaos Tunnels.
## Enemies
Various enemies may be encountered while looting camps. Two of each type of follower is in a camp, and each camp has one general. Attacking them will cause the player to automatically switch to multi-combat (if in single-combat).
Note: All followers are level 120.
(Players of all levels can attack you while participating in this activity.)
Magic Arrows Shaman Wild mage Thaumaturge Occultist
Melee Fire spells Myrmidon Reaver Enforcer Blackguard
Ranged Slash Skirmisher Hunter Scout Bandit
Magic/Melee Nothing Archon Warlord Sergeant Demon lord
## Rewards
Note: If you enter the lobby or try to hop worlds then your supplies are automatically destroyed.
Warbands grant experience in a number of skills during participation and additional rewards such as coins or experience are earned by trading in loot from the camp. Possible rewards include:
When handing in supplies to Quercus, the experience earned is determined by the formula:
$\text{xp}(s, x) = \frac{s}{2} \left(x^2 - 2x + 100 \right)$
where s is the number of supplies handed in and where x is your skill's level.
The Prayer and Summoning experience for the beam is the same as Master jack of trades aura, which means:
$2\left(x^2-2x+100\right)$
where x is the level of Prayer/Summoning.
Slayer experience from killing the camp leader is half of the Master jack of trades aura. This gives a formula of:
$\left(x^2-2x+100\right)$
The supplies can also be handed in to Quercus for 4,000 coins each, totaling up to 300,000 coins per day if the maximum of 75 supplies are collected.
template = Template:Wilderness Warbands calc
form = ppf
result = ppr
param = 1|Number of supplies|1|int|1-75
param = 2|Skill level|1|int|1-99
Wilderness Warbands Calculator
Result
49.5 experience
## Trivia
• Assuming players hand in 25 Skill supplies from each warband, it would take a player participation in 148 warbands (a minimum of 50 days) to go from level 1 to level 99 in a skill. It would take a player participation in 1685 warbands (a minimum of 562 days) to go from level 1 to 200 million experience in a skill. For both of the calculations, it's been assumed that players go to all 3 camps every day, which spawn every 7 hours, and manage loot and escape with 25 supplies for every camp.
• When reinforcements come in to fight a warband, they start off by attacking a follower who is weak to their combat style. For example, a Saradominist thaumaturge (magic user) will first attack a Bandosian reaver (melee user).
• Each follower is weak to a specific attack style that another follower uses. The mages are weak to arrows, which are used by the rangers; the rangers are weak to slash weapons, which the warriors use; and the warriors are weak to fire spells, which the mages use.
• When a general uses his magic attack on a player, ALL players within range will be hit.
• Prior to a hidden update, warband followers did not use abilities when fighting each other, only when fighting players. This has since been fixed, and they now use abilities against each other.
• Warbands are one of few Distraction and Diversions that cannot be reset using a D&D token.
• Warbands were "tweaked" on 11 June 2013. The tweaks were:
• Synchronized worlds, as before players could hop from world to world to gain massive amounts of XP (and possibly profit) per warband. Each one now occurs every 7 hours.
• Attempting to log into another world while in the wilderness causes all of the looted supplies to disappear from your inventory.
• There are only 1000 lootable supplies per camp. This is equal to 40 players with full loot, since 25 is the maximum amount of supplies per person per camp.
• Quercus will now exchange the supplies for XP on their respective skill, or turn them into 4000 coins per supply.
• Whenever a player entering the camp has been spotted by a Warband NPC, an exclamation mark can be seen over their heads. This is most likely a reference to the Metal Gear Solid game series.
• Strangely, while melee users for each faction of warbands have different weapons, range and magic users all use the same bow and staff respectively.
• If dropped supplies are left unclaimed after you or another player dies, it is possible to recover them if you can get to the spot where they / you died quickly enough. Picking up supplies from a dead player does not skull you (though obviously attacking a player will), and a small number of supplies may be saved if you manage to die without a skull. However, any supplies picked up from the ground will still count against the 75 daily supply cap.
• Several teleportation methods can still be used when holding Warband supplies:
• Fairy rings were originally usable while holding warband supplies, this feature was removed with the introduction of the Portable fairy ring.
## References
1. ^ Mod Mark. "Behind the Scenes: Special Edition." 16 November 2012. RuneScape News.
Community content is available under CC-BY-SA unless otherwise noted.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27908623218536377, "perplexity": 4374.259593062163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00227.warc.gz"}
|
https://link.springer.com/article/10.1007%2Fs10035-016-0624-2
|
Granular Matter
, 18:58
# Memory of jamming–multiscale models for soft and granular matter
• Nishant Kumar
• Stefan Luding
Open Access
Original Paper
Part of the following topical collections:
1. Micro origins for macro behavior of granular matter
## Abstract
Soft, disordered, micro-structured materials are ubiquitous in nature and industry, and are different from ordinary fluids or solids, with unusual, interesting static and flow properties. The transition from fluid to solid—at the so-called jamming density—features a multitude of complex mechanisms, but there is no unified theoretical framework that explains them all. In this study, a simple yet quantitative and predictive model is presented, which allows for a changing jamming density, encompassing the memory of the deformation history and explaining a multitude of phenomena at and around jamming. The jamming density, now introduced as a new state-variable, changes due to the deformation history and relates the system’s macroscopic response to its micro-structure. The packing efficiency can increase logarithmically slow under gentle repeated (isotropic) compression, leading to an increase of the jamming density. In contrast, shear deformations cause anisotropy, changing the packing efficiency exponentially fast with either dilatancy or compactancy as result. The memory of the system near jamming can be explained by a micro-statistical model that involves a multiscale, fractal energy landscape and links the microscopic particle picture to the macroscopic continuum description, providing a unified explanation for the qualitatively different flow-behavior for different deformation modes. To complement our work, a recipe to extract the history-dependent jamming density from experimentally accessible data is proposed, and alternative state-variables are compared. The proposed simple macroscopic constitutive model is calibrated from particles simulation data, with the variable jamming density—resembling the memory of microstructure—as essential novel ingredient. This approach can help understanding predicting and mitigating failure of structures or geophysical hazards, and will bring forward industrial process design and optimization, and help solving scientific challenges in fundamental research.
## Keywords
Jamming Structure Anisotropy Dilatancy Creep/relaxation Memory Critical state
## 1 Introduction
Granular materials are a special case of soft-matter with micro-structure, as also foams, colloidal systems, glasses, or emulsions [1, 2, 3]. Particles can flow through a hopper or an hour-glass when shaken, but jam (solidify) when the shaking stops [4]. These materials jam above a “certain” volume fraction, or jamming density, referred to as the “jamming point” or “jamming density” [3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], and become mechanically stable with finite bulk- and shear-moduli [8, 9, 12, 15, 24, 25, 26, 27]. Notably, in the jammed state, these systems can “flow” by reorganizations of their micro-structure [28, 29]. Around the jamming transition, these systems display considerable inhomogeneity, such as reflected by over-population of weak/soft/slow mechanical oscillation modes [11], force-networks [10, 30, 31], diverging correlation lengths and relaxation time-scales [9, 13, 22, 32, 33, 34, 35], and some universal scaling behaviors [36, 37]. Related to jamming, but at all densities, other phenomena occur, like shear-strain localization [12, 16, 38, 39, 40], anisotropic evolution of structure and stress [7, 9, 11, 13, 30, 31, 38, 39, 40, 41, 42, 43, 44, 45, 46], and force chain inhomogeneity [7, 19, 28]. To gain a better understanding of the jamming transition concept, one needs to consider both the structure (positions and contacts) and contact forces. Both of them illustrate and reflect the transition, e.g., with a strong force chain network percolating the full system and thus making unstable packings permanent, stable and rigid [7, 19, 47, 48, 49].
For many years, scientists and researchers have considered the jamming transition in granular materials to occur at a particular volume fraction, $$\phi _J$$ [50]. In contrast, over the last decade, numerous experiments and computer simulations have suggested the existence of a broad range of $$\phi _J$$, even for a given material. It was shown that the critical density for the jamming transition depends on the preparation protocol [12, 18, 22, 23, 36, 51, 52, 53, 54, 55, 56, 57, 58], and that this state-variable can be used to describe and scale macroscopic properties of the system [26]. For example, rheological studies have shown that $$\phi _J$$ decreases with increasing compression rate [8, 57, 59, 60] (or with increasing growth rate of the particles), with the critical scaling by the distance from the jamming point ($$\phi - \phi _J$$) being universal and independent of $$\phi _J$$ [20, 36, 51, 61, 62] Recently, the notion of an a-thermal isotropic jamming “point” was challenged due to its protocol dependence, suggesting the extension of the jamming point, to become a J-segment [42, 60, 63, 64]. Furthermore, it was shown experimentally, that for a tapped, unjammed frictional 2D systems, shear can jam the system (known as “shear jamming”), with force chain networks percolating throughout the system, making the assemblies jammed, rigid and stable [7, 29, 47, 48, 65, 66], all highlighting a memory that makes the structure dependent on history H. But to the best of our knowledge, quantitative characterization of the varying/moving/changing transition points, based on H, remains a major open challenge.
### 1.1 Application examples
In the fields of material science, civil engineering and geophysics, the materials behave highly hysteretic, non-linear and involve irreversibility (plasticity), possibly already at very small deformations, due to particle rearrangements, more visible near the jamming transition [67, 68, 69, 70]. Many industrial and geotechnical applications that are crucial for our society involve structures that are designed to be far from failure (e.g. shallow foundations or underlying infrastructure), since the understanding when failure and flow happens is not sufficient, but is essential for the realistic prediction of ground movements [71]. Finite-element analyses of, for example tunnels, depend on the model adopted for the pre-failure soil behavior; when surface settlement is considered, the models predicting non-linear elasticity and history dependence become of utmost importance [72]. Design and licensing of infrastructure such as nuclear plants and long span bridges are dependent on a robust knowledge of elastic properties in order to predict their response to seismic ground motion such as the risk of liquefaction and the effect of the presence of anisotropic strata. (Sediments are one example of anisotropic granular materials of particles of organic or inorganic origin that accumulate in a loose, unconsolidated form before they are compacted and solidified. Knowing their mechanical behavior is important in industrial, geotechnical and geophysical applications. For instance, the elastic properties of high-porosity ocean-bottom sediments have a massive impact on unconventional resource exploration and exploitation by ocean drilling programs.)
When looking at natural flows, a complete description of the granular rheology should include an elastic regime [73], and the onset of failure (flow or unjamming) deserves particular attention in this context. The material parameters have a profound influence on the computed deformations prior to failure [74, 75], as the information on the material state is usually embedded in the parameters. Likewise, also for the onset of flow, the state of the material is characterized by the value of the macroscopic friction angle, as obtained, e.g., from shear box experiments or tri-axial tests. Since any predictive model must describe the pre-failure deformation [76] as well as the onset of flow (unjamming) of the material, many studies have been devoted to the characteristics of geomaterials (e.g., tangent moduli, secant moduli, peak strength) and to the post-failure regime [77] or the steady (critical) state flow rheology, see Refs. [40, 78] and references therein.
### 1.2 Approach of this study
Here, we consider frictionless sphere assemblies in a periodic system, which can help to elegantly probe the behavior of disordered bulk granular matter, allowing to focus on the structure [3], without being disturbed by other non-linearities [7, 29, 79] (as e.g. friction, cohesion, walls, environmental fluids or non-linear interaction laws). For frictionless assemblies, it is often assumed that the influence of memory is of little importance, maybe even negligible. If one really looks close enough, however, its relevance becomes evident. We quantitatively explore its structural origin in systems where the re-arrangements of the micro-structure (contact network) are the only possible mechanisms leading to the range of jamming densities (points), i.e. a variable state-variable jamming density.
In this study, we probe the jamming transition concept by two pure deformation modes: isotropic compression or “tapping” and deviatoric pure shear (volume conserving), which allow us to combine the J-segment concept with a history dependent jamming density.1 Assuming that all other deformations can be superimposed by these two pure modes, we coalesce the two concepts of isotropic and shear induced jamming, and provide the unified model picture, involving a multiscale, fractal-type energy landscape [18, 80, 81, 82]; in general, deformation (or the preparation procedure) modify the landscape and its population; considering only changes of the population already allows to establish new configurations and to predict their evolution. The observations of different $$\phi _J$$ of a single material require an alternative interpretation of the classical “jamming diagram” [5].
Our results will provide a unified picture, including some answers to the open questions from literature: (i) What lies in between the jammed and flowing (unjammed) regime? As posed by Ciamarra et al. [63]. (ii) Is there an absolute minimum jamming density? As posed by Ciamarra et al. [63]. (iii) What protocols can generate jammed states? As posed by Torquato et al. [56]. (iv) What happens to the jamming and shear jamming regime in 3D and is friction important to observe it? As posed by Bi et al. [7]. Eventually, accepting the fact that the jamming density is changing with deformation history, significant improvement of continuum models is expected, not only for classical elasto-plastic or rheology models, but also, e.g., for anisotropic constitutive models [41, 69, 83, 84], GSH rate type models [85, 86], Cosserat micro-polar or hypoplastic models [87, 88, 89] or continuum models with a length scale and non-locality [90, 91]. For this purpose we provide a simple (usable) analytical macro/continuum model as generalization of continuum models by adding one isotropic state-variable. Only allowing $$\phi _J(H)$$ to be dependent on history H [64, 92], as key modification, explains a multitude of reported observations and can be significant step forward to solve real-world problems in e.g. electronic industry related novel materials, geophysics or mechanical engineering.
Recent works showed already that, along with the classical macroscopic properties (stress and volume fraction), the structural anisotropy is an important [41, 45, 46, 93, 94, 95, 96] state-variable for granular materials, as quantified by the fabric tensor [43, 69] that characterizes, on average, the geometric arrangement of the particles, the contacts and their network, i.e. the microstructure of the particle packing. Note that the anisotropy alone is not enough to characterize the structure, but also an isotropic state-variable is needed, as is the main message of this study.
### 1.3 Overview
The paper continues with the simulation method in Sect. 2, before the micromechanical particle- and contact-scale observations are presented in Sect. 3, providing analytical (quantitative) constitutive expressions for the change of the jamming density with different modes of deformation. Section 4 is dedicated to a (qualitative) meso-scale stochastic model that explains the different (slow versus fast) change of $$\phi _J(H)$$ for different deformation modes (isotropic versus deviatoric/shear). A quantitative predictive macroscale model is presented in Sect. 5 and verified by comparison with the microscale simulations, before an experimental validation procedure is discussed in Sect. 6 and the paper is summarized and conclusions are given in Sect. 7.
## 2 Simulation method
Discrete Element Method (DEM) simulations are used to model the deformation behavior of systems with $$N = 9261$$ soft frictionless spherical particles with average radius $$\langle r \rangle = 1$$ (mm), density $$\rho = 2000$$ (kg/m3), and a uniform polydispersity width $$w = r_{\mathrm {max}}/r_{\mathrm {min}}= 3$$, using the linear visco-elastic contact model in a 3D box with periodic boundaries [44, 69]. The particle stiffness is $$k = 10^8$$ (kg/s2), contact viscosity is $$\gamma = 1$$ (kg/s). A background dissipation force proportional to the moving velocity is added with $$\gamma _b= 0.1$$ (kg/s). The particle density is $$\rho = 2000$$ (kg/m3). The smallest time of contact is $$t_c = 0.2279$$ ($$\upmu$$s) for a collision between two smallest sized particles [41].
### 2.1 Preparation procedure and main experiments
For the preparation, the particles are generated with random velocities at volume (solid) fraction $$\phi =0.3$$ and are isotropically compressed to $$\phi _t=0.64$$, and later relaxed. From such a relaxed, unjammed, stress free initial state with volume fraction, $$\phi _t= 0.64 < \phi _J$$, we compress isotropically further to a maximum volume fraction, $$\phi ^{\mathrm {max}}_i$$, and decompress back to $$\phi _t$$, during the latter unloading $$\phi _J$$ is identified. This process is repeated over M (100) cycles, which provides different isotropic jamming densities (points) $$\phi _J=:{}^M\phi _{J,i}$$, related with $$\phi ^{\mathrm {max}}_i$$ and M (see Sect. 3.1).
Several isotropic configurations $$\phi$$, such that $$\phi _t< \phi < {}^1\phi _{J,i}$$ from the decompression branch are chosen as the initial configurations for shear experiments. We relax them and apply pure (volume conserving) shear (plane-strain) with the diagonal strain-rate tensor $${{\dot{\varvec{\mathrm E}}}}= \pm {\dot{\epsilon }}_\mathrm {d} \left( -1,1,0\right)$$, for four cycles.2 The x and y walls move, while the z wall remain stationary. The strain rate of the (quasi-static) deformation is small, $${\dot{\epsilon }}_\mathrm {d} t_c < 3 \times 10^{-6}$$, to minimize transient behavior and dynamic effects.3
### 2.2 Macroscopic (tensorial) quantities
Here, we focus on defining averaged tensorial macroscopic quantities—including strain-, stress- and fabric (structure) tensors—that provide information about the state of the packing and reveal interesting bulk features.
From DEM simulations, one can measure the ‘static’ stress in the system [97] as
\begin{aligned} \varvec{\sigma }=\left( {1}/{V}\right) \sum _{c\in V}\mathbf {l}^{c}\otimes \mathbf {f}^{c}, \end{aligned}
(1)
average over all the contacts in the volume V of the dyadic products between the contact force $$\mathbf {f}^{c}$$ and the branch vector $$\mathbf {l}^{c}$$, where the contribution of the kinetic fluctuation energy has been neglected [41, 93]. The dynamic component of the stress tensor is four orders of magnitude smaller than the former and hence its contribution is neglected. The isotropic component of the stress is the pressure $$P= \mathrm {tr}(\varvec{\sigma })/3$$.
In order to characterize the geometry/structure of the static aggregate at microscopic level, we will measure the fabric tensor, defined as
\begin{aligned} \mathbf {F}=\frac{1}{V}\sum _{{\mathcal {P}}\in V}V^{{\mathcal {P}}}\sum _{c\in {\mathcal {P}}}\mathbf {n}^{c}\otimes \mathbf {n}^{c}, \end{aligned}
(2)
where $$V^{\mathcal {P}}$$ is the volume relative to particle $${\mathcal {P}}$$, which lies inside the averaging volume V, and $$\mathbf {n}^{c}$$ is the normal unit branch-vector pointing from center of particle $${\mathcal {P}}$$ to contact c [93, 98, 99]. Isotropic part of fabric is $$F_\mathrm {v}= \mathrm {tr}(\mathbf {F})$$. The corrected coordination number [7, 41] is $$C^{*}= {M_4}/{N_4},$$ where, $$M_4$$ is total contacts of the $$N_4$$ particles having at least 4 contacts, and the non-rattler fraction is $$f_\mathrm {NR}= N_4/N$$. C is the ratio of total non-rattler contacts $$M_4$$ and total number of particles N, i.e., $$C=M_4/N = \left( M_4/N_4\right) \left( N_4/N\right) = C^{*}f_\mathrm {NR}$$, with corrected coordination number $$C^{*}$$ and fraction of non-rattlers $$f_\mathrm {NR}$$. The isotropic fabric $$F_\mathrm {v}$$ is given by the relation $$F_\mathrm {v}= g_3 \phi C$$, as taken from Imole et al. [41], with $$g_3\cong 1.22$$ for the polydispersity used in the present work. For any tensor $${\mathrm {\varvec{\mathrm Q}}}$$, its deviatoric part can be defined as $$Q_\mathrm {d} = \mathrm {sgn}\left( q_{yy} - q_{xx}\right) \sqrt{ 3 q_{ij}q_{ij}/2}$$, where $$q_{ij}$$ are the components of the deviator of $${\mathrm {\varvec{\mathrm Q}}}$$, and the sign function accounts for the shear direction, in the system considered here, where a more general formulation is given in Ref. [69]. Both pressure $$P$$ and shear stress $$\varGamma$$ are non-dimensionalized by $${2\langle r \rangle }/{k}$$ to give dimensionless pressure p and shear stress $$\uptau$$.
Table 1
Parameters used in Eq. (3) and Eqs. (911), where ‘*’ represents slightly different values than from Imole et al. [41], modified slightly to have more simple numbers, without big deviation, and without loss of generality
Quantity
Isotropic
Shear
p
$$p_0=0.042$$; $$\gamma _p=0\pm 0.1$$*
$$p_0=0.042$$; $$\gamma _p=0\pm 0.1$$*
$$C^{*}$$
$$C_1=8.5 \pm 0.3$$*; $$\theta =0.58$$
$$C_1=8.5 \pm 0.3$$*; $$\theta =0.58$$
$$f_\mathrm {NR}$$
$$\varphi _c=0.13$$; $$\varphi _v=15$$
$$\varphi _c=0.16$$; $$\varphi _v=15$$
## 3 Micromechanical results
### 3.1 Isotropic deformation
In this section, we present a procedure to identify the jamming densities and their range. We also show the effect of cyclic over-compression to different target volume fractions and present a model that captures this phenomena.
#### 3.1.1 Identification of the jamming density
When a sample is over-compressed isotropically, the loading and unloading paths are different in pressure $$p$$. This difference is most pronounced near the jamming density $$\phi _J$$, and for the first cycle. It brings up the first question of how to identify a jamming density, $$\phi _J$$. The unloading branch of a cyclic isotropic over-compression along volume fraction $$\phi$$ is well described by a linear relation in volumetric strain, with a tiny quadratic correction [44, 100, 101]:
\begin{aligned} p=\frac{\phi C}{\phi _J}p_0 (-\varepsilon _\mathrm {v}) \left[ 1-\gamma _p (-\varepsilon _\mathrm {v}) \right] , \end{aligned}
(3)
where $$p_0$$, $$\gamma _p$$, as presented in Table 1, and the jamming density $$\phi _J$$ are the fit parameters, and $$-\varepsilon _\mathrm {v}=\log (\phi /\phi _J)$$ is the true or logarithmic volumetric strain of the system, defined relative to the reference where $$p\rightarrow 0$$, i.e. the jamming volume fraction.
Equation (3), quantifies the scaled stress and is proportional to the dimensionless deformation (overlap per particle size), as derived analytically [100] from the definition of stress and converges to $$p\rightarrow 0$$ when $$\phi \rightarrow \phi _J$$.
We apply the same procedure for different over-compressions, $$\phi ^{\mathrm {max}}_i$$, and many subsequent cycles M to obtain $${}^M\phi _{J,i}$$, for which the results are discussed below. The material parameter $$p_0$$ is finite, almost constant, whereas $$\gamma _p$$ is small, sensitive to history and contributes mainly for large $$-\varepsilon _\mathrm {v}$$, with values ranging around $$0\pm 0.1$$; in particular, it is dependent on the over-compression $$\phi ^{\mathrm {max}}_i$$ (data not shown). Unless strictly mentioned, we shall be using the values of $$p_0$$ and $$\gamma _p$$ given in Table 1.
Figure 1a shows the behavior of $$p$$ with $$\phi$$ during one full over-compression cycle to display the dependence of the jamming density on the maximum over-compression volume fraction and the number of cycles. With increasing over-compression amplitude, e.g. comparing $$\phi ^{\mathrm {max}}_i = 0.68$$ and $$\phi ^{\mathrm {max}}_i = 0.82$$, the jamming density, as realized after unloading, is increasing. Also, with each cycle, from $$M=1$$ to $$M=100$$, the jamming density moves to larger values. Note that the difference between the loading and the unloading curves becomes smaller for subsequent over-compressions. Fig. 1b shows the scaled pressure, i.e., $$p$$ normalized by $$\phi C/\phi _J$$, which removes its non-linear behavior. $$p$$ represents the average deformation (overlap) of the particles at a given volume fraction, proportional to the distance from the jamming density $$\phi _J$$.4 In the small strain region, for all over-compression amplitude and cycles, the datasets collapse on a line with slope $$p_0\sim 0.042$$. Only for very strong over-compression, $$-\varepsilon _\mathrm {v}>0.1$$, a small deviation (from linear) of the simulation data is observed due to the tiny quadratic correction in Eq. (3).
#### 3.1.2 Isotropic cyclic over-compression
Many different isotropic jamming densities can be found in real systems and—as shown here—also for the simplest model material in 3D [64]. Figure 2a shows the evolution of these extracted isotropic jamming densities $${}^M\phi _{J,i}$$, which increase with increasing M and with over-compression $$\phi ^{\mathrm {max}}_i$$, for subsequent cycles M of over-compressions, the jamming density $${}^M\phi _{J,i}$$ grows slower and slower and is best captured by a Kohlrausch-Williams-Watts (KWW) stretched exponential relation:
\begin{aligned} \begin{aligned} {}^M\phi _{J,i}:&= \phi _J(\phi ^{\mathrm {max}}_i, M) \\&= {}^{\infty }\phi _{J,i}- \left( {}^{\infty }\phi _{J,i}- \phi _{c}\right) \exp \left[ {-\left( {M}/{\mu _i}\right) ^{\beta _i}}\right] , \end{aligned} \end{aligned}
(4)
with the three universal “material”-constants $$\phi _{c}= 0.6567$$ (Sect. 3.2.2), $$\mu _i =1$$, and $$\beta _i=0.3$$, the lower limit of possible $$\phi _J$$’s, the relaxation (cycle) scale and the stretched exponent parameters, respectively. Only $${}^{\infty }\phi _{J,i}$$, the equilibrium (steady-state or shakedown [102]) jamming density limit (extrapolated for $$M \rightarrow \infty$$), depends on the over-compressions $$\phi ^{\mathrm {max}}_i$$. $$\phi _{c}$$ is the critical density in the zero pressure limit without previous history, or after very long shear without temperature (which all are impossible to realize with experiments or simulation–only maybe with energy minimization).
Very little over-compression, $$\phi ^{\mathrm {max}}_i \gtrsim \phi _{c}$$, does not lead to a significant increase in $$\phi _{J,i}$$, giving us information about the lower limits of the isotropic jamming densities achievable by shear, which is the critical jamming density $$\phi _{c}= 0.6567$$. With each over-compression cycle, $${}^M\phi _{J,i}$$ increases, but for larger M, it increases less and less. This is analogous to compaction by tapping, where the tapped density increases logarithmically slow with the number of taps. The limit value $${}^{\infty }\phi _{J,i}$$ with $$\phi ^{\mathrm {max}}_i$$ can be fitted with a simple power law relation:
\begin{aligned} {}^{\infty }\phi _{J,i}= \phi _{c}+ \alpha _\mathrm{max} \left( \phi ^{\mathrm {max}}_i / \phi _{c}- 1 \right) ^{\beta } , \end{aligned}
(5)
where the fit works perfect for $$\phi _{c}< \phi ^{\mathrm {max}}_i \le 0.9$$, with parameters $$\phi _{c}=0.6567$$, $$\alpha _\mathrm{max}=0.02\pm 2\,\%$$, and $$\beta =0.3$$, while the few points for $$\phi ^{\mathrm {max}}_i \sim \phi _{c}$$ are not well captured. The relation between the limit-value $${}^{\infty }\phi _{J,i}$$ and $${}^1\phi _{J,i}$$ is derived using Eq. (4):
\begin{aligned} {}^{\infty }\phi _{J,i}-\phi _{c}= \frac{{}^1\phi _{J,i}-\phi _{c}}{1-e^{-1}} \cong 1.58 \left( {{}^1\phi _{J,i}-\phi _{c}} \right) , \end{aligned}
(6)
only by setting $$M=1$$, as shown in Fig. 2b, with perfect match. With other words, using a single over-compression, Eq. (6) allows to predict the limit value after first over-compression $${}^1\phi _{J,i}$$ (or subsequent over-compression cycles, using appropriate M).
Thus, the isotropic jamming density $$\phi _J$$ is not a unique point, not even for frictionless particle systems, and is dependent on the previous deformation history of the system [63, 82, 103], e.g. over-compression or tapping/driving (data not shown). Both (isotropic) modes of deformation lead to more compact, better packed configurations [7, 47, 104]. Considering different system sizes, and different preparation procedures, we confirm that the jamming regime is the same (within fluctuations) for all the cases considered (not shown). All our data so far, for the material used, are consistent with a unique limit density $$\phi _{c}$$ that is reached after large strain, very slow shear, in the limit of vanishing confining pressure. Unfortunately this limit is vaguely defined, since it is not directly accessible, but rather corresponds to a virtual stress-free state. The limit density is hard to determine experimentally and numerically as well. Reason is that any slow deformation (e.g. compression from below jamming) also leads to perturbations (like tapping leads to granular temperature): the stronger the system is perturbed, the better it will pack, so that usually $$\phi _J > \phi _{c}$$ is established. Repeated perturbations lead to a slow stretched exponential approach to an upper-limit jamming density $$\phi _J\rightarrow \phi _J^{\mathrm {max}}$$ that itself increases slowly with perturbation amplitude, see Fig. 2b. The observation of different $$\phi _J$$ of a single material, was referred to as J-segment [63, 103], and requires an alternative interpretation of the classical “jamming diagram” [5, 7, 66], giving up the misconception of a single, constant jamming “density”. Note that the J-segment is not just due to fluctuations, but it is due to the deformation history, and with fluctuations superposed. The state-variable $$\phi _J$$ varies due to deformation, but possibly has a unique limit value that we denote for now as $$\phi _{c}$$. Jammed states below $$\phi _{c}$$ might be possible too, but require different protocols [105], or different materials, and are thus not addressed here. Next, we discuss the concept of shear jammed states [7] below $$\phi _J$$.
### 3.2 Shear deformation
To study shear jamming, we choose several unjammed states with volume fractions $$\phi$$ below their jamming densities $${}^1\phi _{J,i}$$, which were established after the first compression-decompression cycle, for different history, i.e., various previously applied over-compression to $$\phi ^{\mathrm {max}}_i$$. Each configuration is first relaxed and then subjected to four isochoric (volume conserving) pure shear cycles (see Sect. 2.1).
#### 3.2.1 Shear jamming below $$\phi _J(H)$$
We confirm shear jamming, e.g., by a transition in the coordination number $$C^{*}$$, from below to above its isostatic limit, $$C^*_0= 6$$, for frictionless grains [13, 31, 38, 41]. This was consistently (independently) reconfirmed by using percolation analysis [7, 30], allowing us to distinguish the three different regimes namely, unjammed, fragile and shear jammed states during (and after) shear [66], as shown in Fig. 3a. We study how the $$k-$$cluster, defined as the largest force network, connecting strong forces, $$f \ge k f_\mathrm {avg}$$ [109, 110], with $$k=2.2$$, different from $$k=1$$ for 2D frictional systems [7], percolates when the initially unjammed isotropic system is sheared. More quantitatively, for an exemplary volume fraction $$\phi \left( \phi ^{\mathrm {max}}_i = 0.82, M=1\right) = 0.6584$$, very close to $$\phi _{c}$$, Fig. 3b shows that $$f_\mathrm {NR}$$ increases from initially zero to large values well below unity due to the always existing rattlers. The compressive direction percolating network $$\xi _y/L_y$$ grows faster than the extension direction network $$\xi _x/L_x$$, while the network in the non-mobile direction, $$\xi _z/L_z$$, lies in between them. For $$f_\mathrm {NR}>0.82\pm 0.01$$, we observe that the growing force network is percolated in all three directions (Fig. 3a), which is astonishingly similar to the value reported for the 2D systems [7]. The jamming by shear of the material corresponds (independently) to the crossing of $$C^{*}$$ from the isostatic limit of $$C^*_0=6$$, as presented in Fig. 3b.
From this perspective, when an unjammed material is sheared at constant volume, and it jams after application of sufficient shear strain, clearly showing that the jamming density has moved to a lower value. Shearing the system also perturbs it, just like over-compression; however, in addition, finite shear strains enforce shape- and structure-changes and thus allow the system to explore new configurations; typically, the elevated jamming density $$\phi _J$$ of a previously compacted system will rapidly decrease and exponentially approach its lower-limit, the critical jamming density $$\phi _{c}$$, below which no shear jamming exists. Note that we do not exclude the possibility that jammed states below $$\phi _{c}$$ could be achieved by other, special, careful preparation procedures [111].
Next, we present the evolution of the strong force networks in each direction during cyclic shear, as shown in Fig. 4, for the same initial system. After the first loading, at reversal $$f_\mathrm {NR}$$ drops below the 0.82 threshold, which indicates the breakage/disappearance of strong clusters, i.e. the system unjams. The new extension direction $$\xi _y/L_y$$ drops first with the network in the non-mobile directions, $$\xi _z/L_z$$, lying again in between the two mobile direction. With further applied strains, $$f_\mathrm {NR}$$ increases and again, the cluster associated with the compression direction grows faster than in the extension direction. For $$f_\mathrm {NR}$$ above the threshold, the cluster percolates the full system, leading to shear jammed states again. At each reversal, the strong force network breaks/fails in all directions, and the system gets “soft” or even unjams temporarily. However, the network is rapidly re-established in the perpendicular direction, i.e., the system jams and the strong, anisotropic force network again sustains the load. Note that some systems with volume fraction higher and away from $$\phi _{c}$$ can resist shear strain reversal as described and modeled in Sect. 5.1.3.
#### 3.2.2 Relaxation effects on shear jammed states
Here, we will discuss the system stability by looking at the macroscopic quantities in the saturation state (after large shear strain), by relaxing them sufficiently long to have non-fluctuating values in the microscopic and macroscopic quantities. Every shear cycle after defining e.g. the $$y-$$direction as the initial active loading direction, has two saturation states, one during loading and, after reversal, the other during unloading. In Fig. 5, we show values attained by the isotropic quantities pressure $$p$$, isotropic fabric $$F_\mathrm {v}$$ and the deviatoric quantities shear stress $$\uptau$$, shear stress ratio $$\uptau /p$$, and deviatoric fabric $$F_\mathrm {d}$$ for various $$\phi$$ given the same initial jamming density $$\phi _J\left( \phi ^{\mathrm {max}}_i = 0.82, M=1\right) =: {}^1\phi _{J,i}= 0.6652$$. Data are shown during cyclic shear as well as at the two relaxed saturation states (averaged over four cycles), leading to following observations:
1. (i)
With increasing volume fraction, $$p$$, $$F_\mathrm {v}$$ and $$\uptau$$ increase, while a weak decreasing trend in stress ratio $$\uptau /p$$ and deviatoric fabric $$F_\mathrm {d}$$ is observed.
2. (ii)
There is almost no difference in the relaxed states in isotropic quantities, $$p$$ and $$F_\mathrm {v}$$ for the two directions, whereas it is symmetric about zero for deviatoric quantities, $$\uptau$$, $$\uptau /p$$, and $$F_\mathrm {d}$$. The decrease in pressure during relaxation is associated with dissipation of kinetic energy and partial opening of the contacts to “dissipate” the related part of the contact potential energy. However, $$F_\mathrm {v}$$ remains at its peak value during relaxation. It is shown in Sect. 2.2 that $$F_\mathrm {v}= g_3 \phi C$$, as taken from Imole et al. [41], with $$g_3\cong 1.22$$ for the polydispersity used in the present work. Thus we conclude that the contact structure is almost unchanged and the network remains stable during relaxation, since during relaxation $$\phi$$ does not change.
3. (iii)
For small volume fractions, close to $$\phi _{c}$$, the system becomes strongly anisotropic in stress ratio $$\uptau /p$$, and fabric $$F_\mathrm {d}$$ rather quickly, during (slow) shear (envelope for low volume fractions in Fig. 5d, e), before it reaches the steady state [49].
4. (iv)
It is easy to obtain the critical (shear) jamming density $$\phi _{c}$$ from the relaxed critical (steady) state pressure $$p$$, and shear stress $$\uptau$$, by extrapolation to zero, as the envelope of relaxed data in Fig. 5a, c.
We use the same methodology presented in Eq. (3) to extract the critical jamming density $$\phi _{c}$$. When the relaxed $$p$$ is normalized with the contact density $$\phi C$$, we obtain $$\phi _{c}= 0.6567 \pm 0.0005$$ by linear extrapolation. A similar value of $$\phi _{c}$$ is obtained from the extrapolation of the relaxed $$\uptau$$ data set, and is consistent with other methods using the coordination number $$C^{*}$$, or the energy [112]. The quantification of history dependent jamming densities $$\phi _J(H)$$, due to shear complementing the slow changes by cyclic isotropic (over)compression in Eq. (4), is discussed next.
### 3.3 Jamming phase diagram with history H
We propose a jamming phase diagram with shear strain, and present a new, quantitative history dependent model that explains jamming and shear jamming, but also predicts that shear jamming vanishes under some conditions, namely when the system is not tapped, tempered or over-compressed before shear is applied. Using $${\varepsilon }_{d}$$ and $$\phi$$ as parameters, Fig. 6a shows that for one initial the history dependent jamming state at $${}^1\phi _{J,i}$$, there exist sheared states within the range $$\phi _{c}\le \phi \le \phi _J(H)$$, which are isotropically unjammed. After small shear strain they become fragile, and for larger shear strain jam and remain jammed, i.e., eventually showing the critical state flow regime [45, 46], where pressure, shear stress ratio and structural anisotropy have reached their saturation levels and forgotten their initial state (data not shown). The transition to fragile states is accompanied by partial percolation of the strong force network, while percolation in all directions indicates the shear jamming transition. Above jamming, the large fraction of non-rattlers provides a persistent mechanical stability to the structure, even after shear is stopped.
For $$\phi$$ approaching $$\phi _{c}$$, the required shear strain to jam $${\varepsilon }_{d}^{SJ}$$ increases, i.e., there exists a divergence “point” $$\phi _{c}$$, where ‘infinite’ shear strain might jam the system, but below which no shear jamming was observed. The closer the (constant) volume fraction $$\phi$$ is to the initial $${}^1\phi _{J,i}$$, the smaller is $${\varepsilon }_{d}^{SJ}$$. States with $$\phi \ge {}^1\phi _{J,i}$$ are isotropically jammed already before shear is applied.
Based on the study of many systems, prepared via isotropic over-compression to a wide range of volume fractions $$\phi ^{\mathrm {max}}_i \ge \phi _{c}$$, and subsequent shear deformation, Fig. 6b shows the strains required to jam these states by applying pure shear. A striking observation is that independent of the isotropic jamming density $${}^1\phi _{J,i}$$, all curves approach a unique critical jamming density at $$\phi _{c}\sim 0.6567$$ (see Sect. 3.2.2). When all the curves are scaled with their original isotropic jamming density $${}^M\phi _{J,i}$$ as $$\phi _{sc}= \left( \phi -\phi _{c}\right) /\left( {}^M\phi _{J,i}-\phi _{c}\right)$$ they collapse on a unique master curve
\begin{aligned} \left( {\varepsilon }_{d}^{SJ}/{\varepsilon }_{d}^{0}\right) ^{\alpha } ={-} \log {\phi _{sc}} = {-} \log {\left( \frac{\phi -\phi _{c}}{{}^M\phi _{J,i}-\phi _{c}}\right) }, \end{aligned}
(7)
shown in the inset of Fig. 6b, with power $$\alpha =1.37 \pm 0.01$$ and shear strain scale $${\varepsilon }_{d}^{0}=0.102 \pm 0.001$$ as the fit parameters. Hence, if the initial jamming density $${}^M\phi _{J,i}$$ or $$\phi _J(H)$$ is known based on the past history of the sample, the shear jamming strain $${\varepsilon }_{d}^{SJ}$$ can be predicted.
From the measured shear jamming strain, Eq. (7), knowing the initial and the limit value of $$\phi _J$$, we now postulate its evolution under isochoric pure shear strain:
\begin{aligned} \phi _J({\varepsilon }_{d}) = \phi _{c}+ \left( \phi - \phi _{c}\right) \exp \left[ \left( \frac{ \left( {\varepsilon }_{d}^{SJ}\right) ^\alpha - \left( {\varepsilon }_{d}\right) ^\alpha }{ \left( {\varepsilon }_{d}^{0}\right) ^\alpha }\right) \right] . \end{aligned}
(8)
Inserting, $${\varepsilon }_{d}=0$$, $${\varepsilon }_{d}={\varepsilon }_{d}^{SJ}$$ and $${\varepsilon }_{d}=\infty$$ leads to $$\phi _J= {}^M\phi _{J,i}$$, $$\phi _J= \phi$$ and $$\phi _J= \phi _{c}$$, respectively. The jamming density evolution due to shear strain $${\varepsilon }_{d}$$ is faster than exponential (since $$\alpha > 1$$) decreasing to its lower limit $$\phi _{c}$$. This is qualitatively different from the stretched exponential (slow) relaxation dynamics that leads to the increase of $$\phi _J$$ due to over-compression or tapping, see Fig. 7a for both cases.
## 4 Meso-scale stochastic slow dynamics model
The last challenge is to unify the observations in a qualitative model that accounts for the changes in the jamming densities for both isotropic and shear deformation modes. Over-compressing a soft granular assembly is analogous to small-amplitude tapping [21, 47, 104] of more rigid particles, in so far that both methods lead to more compact (efficient) packing structures, i.e., both representing more isotropic perturbations, rather than shear, which is deviatoric (anisotropic) in nature. These changes are shown in Fig. 2a, where the originally reported logarithmically slow dynamics for tapping [107, 108, 113] is very similar to our results that are also very slow, with a stretched exponential behavior; such slow relaxation dynamics can be explained by a simple Sinai-Diffusion model of random walkers in a random, hierarchical, fractal, free energy landscape [106, 114] in the (a-thermal) limit, where the landscape does not change—for the sake of simplicity.
The granular packing is represented in this picture by an ensemble of random walkers in (arbitrary) configuration space with (potential) energy according to the height of their position on the landscape. (Their average energy corresponds to the jamming density and a decrease in energy corresponds to an increase in $$\phi _J(H)$$, thus representing the “memory” and history dependence with protocol H.) Each change of the ensemble represents a rearrangement of packing and units in ensemble represent sub-systems. Perturbations, such as tapping with some small-amplitude (corresponding to “temperature”) allow the ensemble to find denser configurations, i.e., deeper valleys in the landscape, representing larger (jamming) densities [22, 82]. Similarly, over-compression is squeezing the ensemble “down-hill”, also leading to an increase of $$\phi _J$$, as presented in Fig. 7b. Larger amplitudes will allow the ensemble to overcome larger barriers and thus find even deeper valleys. Repetitions have a smaller chance to do so—since the easy reorganizations have been realized previously—which explains the slow dynamics in the hierarchical multiscale structure of the energy landscape.
In contrast to the isotropic perturbations, where the random walkers follow the “down-hill” trend, shear is anisotropic and thus pushing parts of the ensemble in “up-hill’ direction’. For example, under planar simple shear, one (eigen) direction is extensive (up-hill) whereas an other is compressive (down-hill). If the ensemble is random, shear will only re-shuffle the population. But if the material was previously forced or relaxed towards the (local) land-scape minima, shear can only lead to a net up-hill drift of the ensemble, i.e., to decreasing $$\phi _J$$, referred to as dilatancy under constant stress boundary conditions.
For ongoing over-compression, both coordination number and pressure slowly increase, as sketched in Fig. 8, while the jamming density drifts to larger values due to re-organization events that make the packing more effective, which moves the state-line to the right (also shown in Fig. 7a). For decompression, we assume that there are much less re-organization events happening, so that the pressure moves down on the state-line, until the system unjams. For ongoing perturbations, at constant volume, as tapping or a finite temperature, $$T_g$$, both coordination number and pressure slowly decrease (data not shown), whereas for fixed confining pressure the volume would decrease (compactancy, also not shown).
For ongoing shear, the coordination number, the pressure and the shear stress increase, since the jamming density decreases, as sketched in Fig. 9 until a steady state is reached. This process is driven by shear strain amplitude and is much faster than the relaxation dynamics. For large enough strain the system will be sufficiently re-shuffled, randomized, or “re-juvenated” such that it approaches its quenched, random state close to $$\phi _{c}$$ (see Fig. 7a).
If both mechanisms, relaxation by temperature, and continuous shear are occurring at the same time, one can reach another (non)-“equilibrium” steady state, where the jamming density remains constant, balancing the respective increasing and decreasing trends, as sketched in Fig. 9e.
## 5 Macroscopic constitutive model
In this section, we present the simplest model equations, as used for the predictions, involving a history dependent $$\phi _J(H)$$, as given by Eq. (4) for isotropic deformations and Eq. (8) for shear deformations. The only difference to Imole et al. [41], where these relations are taken from, based on purely isotropic unloading, is the variable $$\phi _J= \phi _J(H)$$.
### 5.1 Presentation and model calibration
#### 5.1.1 During cyclic isotropic deformation
During (cyclic) isotropic deformation, the evolution equation for the corrected coordination number $$C^{*}$$ is:
\begin{aligned} C^{*}= C_0 + C_1\left( \frac{\phi }{\phi _J(H)}-1 \right) ^\theta , \end{aligned}
(9)
with $$C_0=6$$ for the frictionless case and parameters $$C_1$$ and $$\theta$$ are presented in Table 1. The fraction of non-rattlers $$f_\mathrm {NR}$$ is given as:
\begin{aligned} f_\mathrm {NR}= 1 - \varphi _c\mathrm {exp}\left[ -\varphi _v \left( \frac{\phi }{\phi _J(H)}-1 \right) \right] , \end{aligned}
(10)
with parameters $$\varphi _c$$ and $$\varphi _v$$ presented in Table 1. We modify Eq. (3) for the evolution of $$p$$ together with the history dependent $$\phi _J=\phi _J(H)$$ so that,
\begin{aligned} p=\frac{\phi C}{\phi _J(H)}p_0 (-\varepsilon _\mathrm {v}) \left[ 1-\gamma _p (-\varepsilon _\mathrm {v}) \right] , \end{aligned}
(11)
with parameters $$p_0$$ and $$\gamma _p$$ presented in Table 1, and the true or logarithmic volume change of the system is $$-\varepsilon _\mathrm {v} = \log (\phi /\phi _J(H))$$, relative to the momentary jamming density. The non-corrected coordination number is $$C = C^{*}f_\mathrm {NR}$$, as can be computed using Eqs. (9) and (10). Also the parameters $$C_1$$, $$\theta$$ for $$C^{*}$$, $$\varphi _c$$, $$\varphi _v$$ for $$f_\mathrm {NR}$$, and $$p_0$$, $$\gamma _p$$ for pressure p are similar to Imole et al. [41], with the second order correction parameter $$\gamma _p$$ most sensitive to the details of previous deformations; however, not being very relevant since it is always a small correction due to the product $$\gamma _p (-\varepsilon _\mathrm {v})$$.
The above relations are used to predict the behavior of the isotropic quantities: dimensionless pressure $$p$$ and coordination number $$C^{*}$$, during cyclic isotropic compression, as well as for the fraction of non-rattlers for cyclic shear, with corresponding parameters presented in Table 1. Note that during isotropic deformation, $$\phi _J(H)$$ was changed only during the compression branch, using Eq. (4) for fixed $$M=1$$ using $$\phi ^{\mathrm {max}}_i$$ as variable, but is kept constant during unloading/expansion.
The above relations are used to predict the behavior of the isotropic quantities: dimensionless pressure $$p$$ and coordination number $$C^{*}$$, by only adding the history dependent jamming density $$\phi _J(H)$$ to the constitutive model, as tested below in Sect. 5.2.
#### 5.1.2 Cyclic (pure) shear deformation
During cyclic (pure) shear deformation, a simplified equation for the shear stress ratio $$\uptau /p$$ is taken from Imole et al. [41], where the full model was introduced as rate-type evolution equations, and further calibrated and tested by Kumar et al. [69]:
\begin{aligned} \uptau /p= {\left( \uptau /p\right) }^{\mathrm {max}} - \left[ {\left( \uptau /p\right) }^{\mathrm {max}} - \left( \uptau /p\right) ^{\mathrm {0}} \right] \exp \left[ -\beta _s {\varepsilon }_{d}\right] ,\nonumber \\ \end{aligned}
(12)
with $$\left( \uptau /p\right) ^{\mathrm {0}}$$ and $$\left( \uptau /p\right) ^{\mathrm {max}}$$ the initial and maximum (saturation) shear stress ratio, respectively, and $$\beta _s$$ its growth rate.5 Similarly, a simplified equation for the deviatoric fabric $$F_\mathrm {d}$$ can be taken from Refs. [41, 69] as:
\begin{aligned} F_\mathrm {d}= {F_\mathrm {d}}^{\mathrm {max}} - \left[ {F_\mathrm {d}}^{\mathrm {max}} - {F_\mathrm {d}}^{\mathrm {0}} \right] \exp \left[ -\beta _F {\varepsilon }_{d}\right] , \end{aligned}
(13)
with $${F_\mathrm {d}}^{\mathrm {0}}$$ and $${F_\mathrm {d}}^{\mathrm {max}}$$ the initial and maximum (saturation) values of the deviatoric fabric, respectively, and $$\beta _F$$ its growth rate. The four parameters $$\left( \uptau /p\right) ^{\mathrm {max}}$$, $$\beta _s$$ for $$\uptau /p$$ and $${F_\mathrm {d}}^{\mathrm {max}}$$, $$\beta _F$$ for $$F_\mathrm {d}$$ are dependent on the volume fraction $$\phi$$ and are well described by the general relation from Imole et al. [41] as:
\begin{aligned} Q = Q_a + Q_c \exp \left[ -\varPsi \left( \frac{\phi }{\phi _J(H)}-1 \right) \right] , \end{aligned}
(14)
where $$Q_a$$, $$Q_c$$ and $$\varPsi$$ are the fitting constants with values presented in Table 2.
For predictions during cyclic shear deformation, $$\phi _J(H)$$ was changed with applied shear strain $${\varepsilon }_{d}$$ using Eq. (8). Furthermore, the jamming density is set to a larger value just after strain-reversal, as discussed next.
Table 2
Parameters for Eqs. (12) and (13) using Eq. (14), with slightly different values than from Imole et al. [41], that are extracted using the similar procedure as in Imole et al. [41], for states with volume fraction close to the jamming volume fraction
Evolution parameters
$$Q_{a}$$
$$Q_{c}$$
$$\varPsi$$
$$\left( \uptau /p\right) ^{\mathrm {max}}$$
0.12
0.091
7.9
$$\beta _s$$
30
40
16
$${F_\mathrm {d}}^{\mathrm {max}}$$
0
0.17
5.3
$$\beta _F$$
0
40
5.3
#### 5.1.3 Behavior of the jamming density at strain reversal
As mentioned in Sect. 3.2, there are some states below $$\phi _J$$, where application of shear strain jams the systems. The densest of those can resist shear reversal, but below a certain $$\phi _\mathrm {cr}\approx 0.662<\phi _J$$, shear reversal unjams the system again [116]. With this information, we postulate the following:
1. (i)
After the first phase, for large strain pure shear, the system should forget where it was isotropically compressed to before i.e., $${}^M\phi _{J,i}$$ is forgotten and $$\phi _J=\phi _{c}$$ is realized.
2. (ii)
There exists a volume fraction $$\phi _\mathrm {cr}$$, above which the systems can just resist shear reversal and remain always jammed in both forward and reverse shear.
3. (iii)
Below this $$\phi _\mathrm {cr}$$, reversal unjams the system. Therefore, more strain is needed to jam the system (when compared to the initial loading), first to forget its state before reversal, and then to re-jam it in opposite (perpendicular) shear direction. Hence, the strain necessary to jam in reversal direction should be higher than for the first shear cycle.
4. (iv)
As we approach $$\phi _{c}$$, the reverse strain needed to jam the system increases.
We use these ideas and measure the reversal shear strain $${\varepsilon }_{d}^{SJ,R}$$, needed to re-jam the states below $$\phi _\mathrm {cr}$$, as shown in Fig. 10. When they are scaled with $$\phi _\mathrm {cr}$$ as $$\phi _{sc}= \left( \phi -\phi _{c}\right) /\left( \phi _\mathrm {cr}-\phi _{c}\right)$$, they collapse on a unique master curve, very similar to Eq. (7):
\begin{aligned} \left( {\varepsilon }_{d}^{SJ,R}/{\varepsilon }_{d}^{0,R}\right) ^{\alpha } ={-} \log {\phi _{sc}} = {-} \log {\left( \frac{\phi -\phi _{c}}{\phi _\mathrm {cr}-\phi _{c}}\right) }, \end{aligned}
(15)
shown in the inset of Fig. 6b, with the same power $$\alpha =1.37 \pm 0.01$$ as Eq. (7). Fit parameter strain scale $${\varepsilon }_{d}^{0,R}=0.17 \pm 0.002 > {\varepsilon }_{d}^{0} = 0.102$$, is consistent with the above postulates (iii) and (iv).
The above relations are used to predict the isotropic and the deviatoric quantities, during cyclic shear deformation, as described next, with the additional rule that all the quantities attain value zero for $$\phi \le \phi _J(H)$$. Moreover, for any state with $$\phi \le \phi _\mathrm {cr}$$, shear strain reversal moves the jamming density to $$\phi _\mathrm {cr}$$, and the evolution of the jamming density follows Eq. (15).
Any other deformation mode, can be written as a unique superposition of pure isotropic and pure and axial shear deformation modes [117]. Hence the combination of the above can be easily used to describe any general deformation, e.g. uniaxial cyclic compression (data not presented) where the axial strain can be decomposed in two plane strain modes.
### 5.2 Prediction: minimal model
Finally, we test the proposed history dependent jamming density $$\phi _J(H)$$ model, by predicting $$p$$ and $$C^{*}$$, when a granular assembly is subjected to cyclic isotropic compression to $$\phi ^{\mathrm {max}}_i = 0.73$$ for $$M=1$$ and for $$M=300$$ cycles, with $${}^{\infty }\phi _{J,i}=0.667$$, as shown in Fig. 11a, b. It is observed that using the history dependence of $$\phi _J(H)$$, the hysteretic behavior of the isotropic quantities, $$p$$ and $$C^{*}$$, is very well predicted, qualitatively similar to isotropic compression and decompression of real 2D frictional granular assemblies, as shown in by Bandi et al. [58] and Reichhardt and Reichhardt [22].
In Fig. 11c, we show the evolution of the deviatoric quantities shear stress ratio $$\uptau /p$$ and deviatoric fabric $$F_\mathrm {d}$$, when a system with $$\phi =0.6584$$, close to $$\phi _{c}$$, and initial jamming density $$\phi _J(0)= 0.6652$$, is subjected to three shear cycles (lowest panel). The shear stress ratio $$\uptau /p$$ is initially undefined, but soon establishes a maximum (not shown) and decays to its saturation level at large strain. After strain reversal, $$\uptau /p$$ drops suddenly and attains the same saturation value, for each half-cycle, only with alternating sign. The behavior of the anisotropic fabric $$F_\mathrm {d}$$ is similar to that of $$\uptau /p$$. During the first loading cycle, the system is unjammed for some strain, and hence $$F_\mathrm {d}$$ is zero in the model (observations in simulations can be non-zero, when the data correspond to only few contacts, mostly coming from rattlers). However, the growth/decay rate and the saturation values attained are different from those of $$\uptau /p$$, implying a different, independent stress- and structure-evolution with strain—which is at the basis of recently proposed anisotropic constitutive models for quasi-static granular flow under various deformation modes [41]. The simple model with $$\phi _J(H)$$, is able to predict quantitatively the behavior the $$\uptau /p$$ and $$F_\mathrm {d}$$ after the first loading path, and is qualitatively close to the cyclic shear behavior of real 2D frictional granular assemblies, as shown in Supplementary Fig. 7 by Bi et al. [7].
At the same time, also the isotropic quantities are very well predicted by the model, using the simple equations from Sect. 5.1, where only the jamming density is varying with shear strain, while all material parameters are kept constant. Some arbitrariness involves the sudden changes of $$\phi _J$$ at reversal, as discussed in Sect. 5.1. Therefore, using a history dependent $$\phi _J(H)$$ gives hope to understand the hysteretic observations from realistic granular assemblies, and also provides a simple explanation of shear jamming. Modifications of continuum models like anisotropic models [41, 69], or GSH type models [85, 86], by including a variable $$\phi _J$$, can this way quantitatively explain various mechanisms around jamming.
## 6 Towards experimental validation
The purpose of this section is two-fold: First, we propose ways to (indirectly) measure the jamming density, since it is a virtual quantity that is hard to measure directly, just as the “virtual, stress-free reference state” in continuum mechanics which it resembles. Second, this way, we will introduce alternative state-variables, since by no means is the jamming density the only possibility.
Measuring $$\phi _J$$ from experiments Here we show the procedure to extract the history dependent jamming density $$\phi _J(H)$$ from measurable quantities, indirectly obtained via Eqs. (9), (10), (11), and directly from Eq. (8). There are two reasons to do so: (i) the jamming density $$\phi _J(H)$$ is only accessible in the unloading limit $$p\rightarrow 0$$, which requires an experiment or a simulation to “measure” it (however, during this measurement, it might change again); (ii) deducing the jamming density from other quantities that are defined for an instantaneous snapshot/configuration for $$p > 0$$ allows to indirectly obtain it—if, as shown next, these indirect “measurements” are compatible/consistent: Showing the equivalence of all the different $$\phi _J(H)$$, proofs the consistency and completeness of the model and, even more important, provides a way to obtain $$\phi _J(H)$$ indirectly from experimentally accessible quantities.
For isotropic compression Figure 12 shows the evolution of $$\phi _J(H)$$, measured from the two experimentally accessible quantities: coordination number $$C^{*}$$ and pressure p, using Eqs. (9) and (11) respectively for isotropic over-compression to $$\phi ^{\mathrm {max}}_i=0.82$$ over two cycles. Following observations can be made: (i) $$\phi _J$$ for isotropic loading and unloading can be extracted from $$C^{*}$$ and p, (ii) it rapidly increases and then saturates during loading, (iii) it mimics the fractal energy landscape model in Fig. 4 from Luding et al. [114] very well, (iv) while is was assumed not to change for unloading, it even increases, which we attribute to the perturbations and fluctuations (granular temperature) induced during the quasi-static deformations, (v) the indirect $$\phi _J$$ are reproducible and follow the same master-curve for first over-compression as seen in Figs. 12, independent of the maximum—all following deformation is dependent on the previous maximum density.
For shear deformation Figure 13 shows the evolution of $$\phi _J(H)$$, measured from the two experimentally accessible quantities: coordination number $$C^{*}$$ and pressure p, using Eqs. (9) and (11) respectively during volume conserving shear with $$\phi =0.66$$, and the initial jamming density $$\phi _J\left( \phi ^{\mathrm {max}}_i = 0.82, M=1\right) =: {}^1\phi _{J,i}= 0.6652 > \phi$$ and shows good agreement with the theoretical predictions using Eq. (8) after shear jamming. Thus the indirect measurements of $$\phi _J(H)$$ can be applied if $$\phi _J(H)<\phi$$; the result deduced from pressure fits the best, i.e., it interpolates the two others and is smoother.
## 7 Summary, discussion and outlook
In summary, this study presents a quantitative, predictive macroscopic constitutive model that unifies a variety of phenomena around and above jamming, for quasi-static deformation modes. The most important ingredient is a scalar state-variable that characterizes the packing “efficiency” and responds very slowly to (isotropic, perturbative) deformation. In contrast, it responds fast, saturating exponentially with finite shear deformation. This different response to the two fundamentally different modes of deformation (isotropic or deviatoric, shear) is (qualitatively) explained by a stochastic (meso-scale) model with fractal (multiscale) character. All simulation results considered here are quantitatively matched by the macroscopic model after including both the isotropic and the anisotropic microstructure as state-variables. Discussing the equivalence of alternative state-variables and ways to experimentally measure the model parameters concludes the study and paves the way to apply the model to other, more realistic materials. The following subsections wrap up some major aspects of this study and also add some partly speculative arguments about the wider consequences of our results for rheology as well as an outlook.
The questions posed in the introduction can now be answered: (i) The transition between the jammed and flowing (unjammed) regimes is controlled by a single new, isotropic, history dependent state-variable, the jamming density $$\phi _J(H)$$ (with history H as shorthand place-holder for any deformation path), which (ii) has a unique lower critical jamming density $$\phi _{c}$$ when $$p \rightarrow 0$$, reached after long shear without temperature $$T_g$$, so that (iii) the history (protocol dependence) of jamming is completely encompassed by this new state-variable, and (iv) jamming, unjamming and shear jamming can all occur in 3D without any friction, only by reorganizations of the micro-structure.
### 7.2 Lower limit of jamming
The multiscale model framework implies now a minimum $$\phi _{c}$$ that represents the (critical) steady state for a given sample in the limit of vanishing confining stress, i.e., the lower limit of all jamming densities. This is nothing but the mean lowest stable random density a sheared system “locally” can reach due to continuously ongoing shear, in the limit of vanishing confining stress.
This lower limit is difficult to access in experiments and simulations, since every shear also perturbs the system leading at the same time to (slow) relaxation and thus a competing increase in $$\phi _J(H)$$. However, it can be obtained from the (relaxed) steady state values of pressure, extrapolated to zero, i.e., from the envelope of pressure in Fig. 5. Note that either fluctuations, special deformation modes or careful preparation procedures e.g. energy minimization techniques or manual construction [9, 23] may lead to jammed states at even lower density than $$\phi _{c}$$, from which starting to shear would lead to an increase of the jamming density (a mechanism which we could not clearly identify from our frictionless simulations due to very long relaxation times near jamming for soft particles). This suggests future studies in the presence of friction so that one has a wider range of jamming densities and lower density states will be much more stable as compared to the frictionless systems. In this work, we focused on fixed particle size polydispersity with uniform size distribution. We expect the effects of polydispersity [44] will have similar order of explorable jamming range as in this work, whereas friction etc. will cause larger explorable jamming ranges [92] and bigger changes in the calibrated parameters.
### 7.3 Shear jamming as consequence of a varying $$\phi _J(H)$$
Given an extremely simple model picture, starting from an isotropically unjammed system that was previously compressed or tapped (tempered), shear jamming is not anymore a new effect, but is just due to the shift of the state-variable (jamming density) to lower values during shear. In other words, shear jamming occurs when the state-variable $$\phi _J(H)$$ drops below the density $$\phi$$ of the system.
Even though dilatancy is that what is typically expected under shear (of a consolidated packing), also compactancy is observed in some cases [41] and can be readily explained by our model. Given a certain preparation protocol, typically a jamming density $$\phi _J> \phi _{c}$$ will be established for a sample, since the critical limit $$\phi _{c}$$ is very difficult to reach. When next a shear deformation is applied, it depends e.g. on the strain rate whether dilatancy or compaction will be observed: if the shear mode is “slower” than the preparation, or if $$\phi _J> \phi _{c}$$, dilatancy is expected as a consequence of the rapidly decreasing $$\phi _J$$ of the sample. In contrast, for a relatively “fast”, violent shear test (relative to the previous preparation and possibly relaxation procedure), compactancy also can be the result, due to an increase of $$\phi _J$$ during shear.
### 7.4 Rheology
The multiscale models presented in this study, based on data from frictionless particle simulations, imply that a superposition of the two fundamental deformation modes (isotropic and deviatoric, i.e. plane strain pure shear) is possible or, with other words, that the respective system responses are mostly decoupled as shown for the non-Newtonian rheology of simple fluids in Ref. [117]. Even though this decoupling is mostly consistent with our present data (the responses to isotropic and deviatoric deformations are mostly independent and can be measured independently), this separation and superposition cannot be taken for granted for more realistic granular and powder systems.
Nevertheless, the meso-scale model presented here, as based on a multi-scale energy landscape, explains compactancy and dilatancy, at constant confining stress, as caused by an increasing jamming density, or a decreasing jamming density, respectively (not shown). Similarly, at constant volume, the pressure either decreases or increases (pressure-dilatancy) due to an increasing or decreasing jamming density, respectively.
The model also allows to explain other rheological phenomena as shear-thinning (e.g., due to an increasing jamming density, at constant volume) or shear-thickening (e.g., due to a decreasing jamming density, at constant volume). As generalization of the present work, also the (granular) temperature (fluctuations of kinetic energy) can be considered, setting an additional (relaxation) time-scale, which effects the interplay between (shear) strain-rate and the evolution of the jamming density, so that even in a presumed “quasi-static” regime interesting new phenomena can be observed and explained.
### 7.5 Towards experimental validation
The history dependent jamming density $$\phi _J(H)$$ is difficult to access directly, but can consistently be extracted from other, experimentally measurable quantities, e.g. pressure p, coordination number $$C^{*}$$ or fraction of non-rattlers $$f_\mathrm {NR}$$. We explain the methodology to extract $$\phi _J(H)$$ experimentally, and confirm by indirect measurement, as detailed in Sect. 6, that the jamming density is indeed increasing during isotropic deformation and decreasing during shear, consistently also when deduced from these other quantities.
With other words, we do not claim that the jamming density is the only choice for the new state-variable that is needed. It can be replaced by any other isotropic quantity as, e.g. the isotropic fabric, the fraction of non-rattlers, the coordination number, or an empirical stress-free state that is extrapolated from pressure (which can be measured most easily), as long as this variable characterizes the packing “efficiency”.
Since an increased packing efficiency could be due to ordering (crystallization), we tried to, but could not trace any considerable crystallization and definitely no phase-separation. We attribute this to the polydispersity of the sizes of the particles used being in the range to avoid ordering effects, as studied in detail in Ref. [118]. Quantities like the coordination number, which can tremendously increase due to crystallization, did not display significant deviation from the random packing values and, actually, it even decreases in the unloading phases, relative to the initial loading phase, see Fig. 11b. This is not a proof that there is no crystallization going on, it is just not strong enough to be clearly seen. The reasons and micro-structural origin of the increased packing efficiency, as quantified by the new state-variable, are subject of ongoing research.
### 7.6 Outlook
Experiments should be performed to calibrate our model for suspended soft spheres (e.g. gels, almost frictionless) and real, frictional materials [119, 120, 121]. Over-compression is possible for soft materials, but not expected to lead to considerable relaxation due to the small possible compressive strain for harder materials. However, tapping or small-amplitude shear can take the role of over-compression, also leading to perturbations and increasing $$\phi _J$$, in contrast, large-amplitude shear leads to decreasing $$\phi _J$$ and can be calibrated indirectly from different isotropic quantities. Note that the accessible range of $$\phi _J- \phi _{c}$$ is expected to much increase for more realistic systems, e.g., with friction, for non-spherical particle shapes, or for cohesive powders.
From the theoretical side, a measurement of the multiscale energy landscape, e.g. the valley width, depth/shapes and their probabilities [81] should be done to verify our model-picture, as it remains qualitative so far. Finally, applying our model to glassy dynamics, ageing and re-juvenation, and frequency dependent responses, encompassing also stretched exponential relaxation, see e.g. Lieou and Langer [122], is another open challenge for future research. All this involves the temperature as a source of perturbations that affect the jamming density, and will thus also allow to understand more dynamic granular systems where the granular temperature is finite and not negligible as implied in most of this study for the sake of simplicity. A more complete theory for soft and granular matter, which involves also the (granular) temperature, is in preparation.
Last, but not least, while the macro/continuum model predicts a smooth evolution of the state variables, finite-size systems display (system-size dependent) fluctuations that only can be explained by a meso-scale stochastic model as proposed above, with particular statistics as predicted already by rather simple models in Refs. [28, 123, 124].
## Footnotes
1. 1.
Tapping or compression may not be technically equivalent to the protocol isotropic compression. In soil mechanics, the process of tapping may involve anisotropic compression or shear. The process of compression may be either isotropic or anisotropic or even involving shear. For example, a typical soil tests may include biaxial compression, conventional triaxial compression and true triaxial compression. In this work, in the context of compression, we always mean true isotropic in strain. In the context of tapping, we assume that the granular temperature, which is often assumed isotropic, does the work, even though the tapping process is normally not isotropic. So this is an oversimplification, and subject to future study since it was not detailed here.
2. 2.
This deformation mode represents the only fundamental deviatoric deformation motion (complementary to isotropic deformation), since axial strain can be superposed by two plane-strain modes, and because the plane-strain mode allows to study the non-Newtonian out-of-shear-plane response of the system (pressure dilatancy), whereas the axial mode does not. If superposition is allowed, as it seems to be the case for frictionless particles, studying only these two modes is minimal effort, however, we cannot directly extrapolate to more realistic materials.
3. 3.
For the isotropic deformation tests, we move the (virtual) walls and for the shear tests, we move all the grains according to an affine motion compatible with the (virtual) wall motion. For the case where only the (virtual) walls move some arching near the corners can be seen when there is a huge particle size dispersity or if there is a considerable particle friction (data not shown). For the small polydispersity and the frictionless spheres considered in this work, the system is and remains homogeneous and the macroscopic quantities are indistinguishable between the two methods, however, this must not be taken for granted in the presence of friction or cohesion, where wall motions other than by imposed homogeneous strain, can lead to undesired inhomogeneities in the periodic representative volume element.
4. 4.
The grains are soft and overlap $$\delta$$ increases with increasing compression ($$\phi$$). For a linear contact model, it has been shown in Refs. [100, 101] that $$\langle \delta \rangle /\langle r \rangle \propto \mathrm {ln}\left( \phi /\phi _J\right) = -\varepsilon _\mathrm {v}$$ (volumetric strain).
5. 5.
Note that the model in the form used here is ignoring the presence of kinetic energy fluctuations, referred to as granular temperature $$T_g$$, or fields like the so-called fluidity [90, 91, 115], that introduce an additional relaxation time-scale, as is subject of ongoing studies.
## Notes
### Acknowledgments
We thank Robert Behringer, Karin Dahmen, Itai Einav, Ken Kamrin, Mario Liu, Vitaliy Ogarko, Corey O’Hern, and Matthias Sperl, for valuable scientific discussions; critical comments and reviews from Vanessa Magnanimo and Olukayode Imole are gratefully acknowledged. This work was financially supported by the European Union funded Marie Curie Initial Training Network, FP7 (ITN-238577), PARDEM (www.pardem.eu) and the NWO STW-VICI project 10828.
### Conflict of interest
The authors declare no conflict of interest.
## References
1. 1.
Denisov, D., Dang, M.T., Struth, B., Wegdam, G., Schall, P.: Resolving structural modifications of colloidal glasses by combining X-ray scattering and rheology. Sci. Rep. 3, 1631 (2013)
2. 2.
Trappe, V., Prasad, V., Cipelletti, L., Segre, P.N., Weitz, D.A.: Jamming phase diagram for attractive particles. Nature 411, 772–775 (2001)
3. 3.
Walker, D.M., Tordesillas, A., Brodu, N., Dijksman, J.A., Behringer, R.P., Froyland, G.: Self-assembly in a near-frictionless granular material: conformational structures and transitions in uniaxial cyclic compression of hydrogel spheres. Soft matter 11, 2157–2173 (2015)
4. 4.
Wambaugh, J., Behringer, R., Matthews, J., Gremaud, P.: Response to perturbations for granular flow in a hopper. Phys. Rev. E 76, 051303 (2007)
5. 5.
Liu, A.J., Nagel, S.R.: Nonlinear dynamics: jamming is not just cool any more. Nature 396, 21–22 (1998)
6. 6.
Song, C., Wang, P., Makse, H.A.: A phase diagram for jammed matter. Nature 453, 629–632 (2008)
7. 7.
Bi, D., Zhang, J., Chakraborty, B., Behringer, R.P.: Jamming by shear. Nature 480, 355–358 (2011)
8. 8.
Zhang, H.P., Makse, H.A.: Jamming transition in emulsions and granular materials. Phys. Rev. E 72, 011301 (2005)
9. 9.
O’Hern, C.S., Silbert, L.E., Liu, A.J., Nagel, S.R.: Jamming at zero temperature and zero applied stress: the epitome of disorder. Phys. Rev. E 68, 011306 (2003)
10. 10.
Silbert, L.E.: Jamming of frictional spheres and random loose packing. Soft Matter 6, 2918–2924 (2010)
11. 11.
Silbert, L.E., Liu, A.J., Nagel, S.R.: Structural signatures of the unjamming transition at zero temperature. Phys. Rev. E 73, 041304 (2006)
12. 12.
Otsuki, M., Hayakawa, H.: Critical scaling near jamming transition for frictional granular particles. Phys. Rev. E 83, 051301 (2011)
13. 13.
Wang, K., Song, C., Wang, P., Makse, H.A.: Edwards thermodynamics of the jamming transition for frictionless packings: ergodicity test and role of angoricity and compactivity. Phys. Rev. E 86, 011305 (2012)
14. 14.
Silbert, L.E., Liu, A.J., Nagel, S.R.: Normal modes in model jammed systems in three dimensions. Phys. Rev. E 79, 021308 (2009)
15. 15.
Pica Ciamarra, M., Coniglio, A.: Jamming at zero temperature, zero friction, and finite applied shear stress. Phys. Rev. Lett. 103, 235701 (2009)
16. 16.
Van Hecke, M.: Jamming of soft particles: geometry, mechanics, scaling and isostaticity. J. Phys. Condens. Matter 22, 033101 (2010)
17. 17.
Banigan, E.J., Illich, M.K., Stace-Naughton, D.J., Egolf, D.A.: The chaotic dynamics of jamming. Nat. Phys. 9, 288–292 (2013)
18. 18.
Liu, A.J., Nagel, S.R.: The jamming transition and the marginally jammed solid. Annu. Rev. Condens. Matter Phys. 1, 347–369 (2010)
19. 19.
Cates, M.E., Wittmer, J.P., Bouchaud, J.P., Claudin, P.: Jamming, force chains, and fragile matter. Phys. Rev. Lett. 81, 1841 (1998)
20. 20.
Majmudar, T.S., Sperl, M., Luding, S., Behringer, R.P.: Jamming transition in granular systems. Phys. Rev. Lett. 98, 058001 (2007)
21. 21.
Coulais, C., Behringer, R.P., Dauchot, O.: How the ideal jamming point illuminates the world of granular media. Soft Matter 10, 1519–1536 (2014)
22. 22.
Reichhardt, C., Reichhardt, C.J.O.: Aspects of jamming in two-dimensional athermal frictionless systems. Soft Matter 10, 2932–2944 (2014)
23. 23.
Torquato, S., Stillinger, F.H.: Jammed hard-particle packings: from Kepler to Bernal and beyond. Rev. Mod. Phys. 82, 2633 (2010)
24. 24.
Dagois-Bohy, S., Tighe, B.P., Simon, J., Henkes, S., van Hecke, M.: Soft-sphere packings at finite pressure but unstable to shear. Phys. Rev. Lett. 109, 095703 (2012)
25. 25.
Parisi, G., Zamponi, F.: Mean-field theory of hard sphere glasses and jamming. Rev. Mod. Phys. 82, 789–845 (2010)
26. 26.
Inagaki, S., Otsuki, M., Sasa, S.: Protocol dependence of mechanical properties in granular systems. Eur. Phys. J. E 34, 124 (2011)
27. 27.
Métayer, J.F., Suntrup, D.J., Radin, C., Swinney, H.L., Schröter, M.: Shearing of frictional sphere packings. Europhys. Lett. 93, 64003 (2011)
28. 28.
Saitoh, K., Magnanimo, V., Luding, S.: A master equation for the probability distribution functions of forces in soft particle packings. Soft Matter 11, 1253–1258 (2015)
29. 29.
Farhadi, S., Behringer, R.P.: Dynamics of sheared ellipses and circular disks: effects of particle shape. Phys. Rev. Lett. 112, 148301 (2014)
30. 30.
Radjai, F., Wolf, D.E., Jean, M., Moreau, J.: Bimodal character of stress transmission in granular packings. Phys. Rev. Lett. 80, 61 (1998)
31. 31.
Snoeijer, J.H., Ellenbroek, W.G., Vlugt, T.J.H., van Hecke, M.: Sheared force networks: anisotropies, yielding, and geometry. Phys. Rev. Lett. 96, 098001 (2006)
32. 32.
Lerner, E., Düring, G., Wyart, M.: Toward a microscopic description of flow near the jamming threshold. Europhys. Lett. 99, 58003 (2012)
33. 33.
Brown, E., Jaeger, H.: Dynamic jamming point for shear thickening suspensions. Phys. Rev. Lett. 103, 086001 (2009)
34. 34.
Suzuki, K., Hayakawa, H.: Divergence of viscosity in jammed granular materials: a theoretical approach. Phys. Rev. Lett. 115, 098001 (2015)
35. 35.
Wyart, M., Cates, M.E.: Discontinuous shear thickening without inertia in dense non-Brownian suspensions. Phys. Rev. Lett. 112, 098302 (2014)
36. 36.
Otsuki, M., Hayakawa, H.: Critical scaling of a jammed system after a quench of temperature. Phys. Rev. E 86, 031505 (2012)
37. 37.
Otsuki, M., Hayakawa, H.: Avalanche contribution to shear modulus of granular materials. Phys. Rev. E 90, 042202 (2014)
38. 38.
Peyneau, P.-E., Roux, J.-N.: Frictionless bead packs have macroscopic friction, but no dilatancy. Phys. Rev. E 78, 011307 (2008)
39. 39.
Schall, P., van Hecke, M.: Shear bands in matter with granularity. Annu. Rev. Fluid Mech. 42, 67 (2009)
40. 40.
Singh, A., Magnanimo, V., Saitoh, K., Luding, S.: Effect of cohesion on shear banding in quasistatic granular materials. Phys. Rev. E 90(2), 022202 (2014)
41. 41.
Imole, O.I., Kumar, N., Magnanimo, V., Luding, S.: Hydrostatic and shear behavior of frictionless granular assemblies under different deformation conditions. KONA Powder Part. J. 30, 84–108 (2013)
42. 42.
Ciamarra, M.P., Pastore, R., Nicodemi, M., Coniglio, A.: Jamming phase diagram for frictional particles. Phys. Rev. E 84, 041308 (2011)
43. 43.
Imole, O.I., Wojtkowski, M., Magnanimo, V., Luding, S.: Micro-macro correlations and anisotropy in granular assemblies under uniaxial loading and unloading. Phys. Rev. E 89(4), 042210 (2014)
44. 44.
Kumar, N., Imole, O.I., Magnanimo, V., Luding, S.: Effects of polydispersity on the micro-macro behavior of granular assemblies under different deformation paths. Particuology 12, 64–79 (2014)
45. 45.
Zhao, J., Guo, N.: Unique critical state characteristics in granular media considering fabric anisotropy. Géotechnique 63, 695–704 (2013)
46. 46.
Guo, N., Zhao, J.: The signature of shear-induced anisotropy in granular media. Comput. Geotech. 47, 1–15 (2013)
47. 47.
Zhang, J., Majmudar, T.S., Sperl, M., Behringer, R.P.: Jamming for a 2D granular material. Soft Matter 6, 2982–2991 (2010)
48. 48.
Wang, X., Zhu, H.P., Luding, S., Yu, A.B.: Regime transitions of granular flow in a shear cell: a micromechanical study. Phys. Rev. E 88, 032203 (2013)
49. 49.
Walker, D.M., Tordesillas, A., Ren, J., Dijksman, J.A., Behringer, R.P.: Uncovering temporal transitions and self-organization during slow aging of dense granular media in the absence of shear bands. Europhys. Lett. 107, 18005 (2014)
50. 50.
Brown, R.L., Hawksley, P.G.W.: Packing of regular (spherical) and irregular particles. Nature 156, 421–422 (1945)
51. 51.
Charbonneau, P., Corwin, E.I., Parisi, G., Zamponi, F.: Universal microstructure and mechanical stability of jammed packings. Phys. Rev. Lett. 109, 205501 (2012)
52. 52.
Olsson, P., Teitel, S.: Critical scaling of shearing rheology at the jamming transition of soft-core frictionless disks. Phys. Rev. E. 83, 030302 (2011)
53. 53.
Olsson, P., Teitel, S.: Athermal jamming versus thermalized glassiness in sheared frictionless particles. Phys. Rev. E 88, 010301 (2013)
54. 54.
Ozawa, M., Kuroiwa, T., Ikeda, A., Miyazaki, K.: Jamming transition and inherent structures of hard spheres and disks. Phys. Rev. Lett. 109, 205701 (2012)
55. 55.
O’Hern, C.S., Langer, S.A., Liu, A.J., Nagel, S.R.: Force distributions near jamming and glass transitions. Phys. Rev. Lett. 86, 111 (2001)
56. 56.
Torquato, S., Truskett, T.M., Debenedetti, P.G.: Is random close packing of spheres well defined? Phys. Rev. Lett. 84, 2064 (2000)
57. 57.
Mari, R., Krzakala, F., Kurchan, J.: Jamming versus glass transitions. Phys. Rev. Lett. 103, 025701 (2009)
58. 58.
Bandi, M.M., Rivera, M.K., Krzakala, F., Ecke, R.E.: Fragility and hysteretic creep in frictional granular jamming. Phys. Rev. E 87(4), 042205 (2013)
59. 59.
Ashwin, S.S., Yamchi, M.Z., Bowles, R.K.: Inherent structure landscape connection between liquids, granular materials, and the jamming phase diagram. Phys. Rev. Lett. 110, 145701 (2013)
60. 60.
Vågberg, D., Olsson, P., Teitel, S.: Glassiness, rigidity, and jamming of frictionless soft core disks. Phys. Rev. E. 83, 031307 (2011)
61. 61.
Chaudhuri, P., Berthier, L., Sastry, S.: Jamming transitions in amorphous packings of frictionless spheres occur over a continuous range of volume fractions. Phys. Rev. Lett. 104, 165701 (2010)
62. 62.
Zhao, C., Tian, K., Xu, N.: New jamming scenario: from marginal jamming to deep jamming. Phys. Rev. Lett. 106, 125503 (2011)
63. 63.
Ciamarra, M.P., Nicodemi, M., Coniglio, A.: Recent results on the jamming phase diagram. Soft Matter 6, 2871–2874 (2010)
64. 64.
Vinutha, H.A., Sastry, S.: Disentangling the role of structure and friction in shear jamming. Nat. Phys. 12, 578–583 (2016)Google Scholar
65. 65.
Ren, J., Dijksman, J.A., Behringer, R.P.: Reynolds pressure and relaxation in a sheared granular system. Phys. Rev. Lett. 110, 018302 (2013)
66. 66.
Grob, M., Heussinger, C., Zippelius, A.: Jamming of frictional particles: a nonequilibrium first-order phase transition. Phys. Rev. E 89, 050201 (2014)
67. 67.
Bardet, J.P.: Observations on the effects of particle rotations on the failure of idealized granular materials. Mech. Mater. 18, 159–182 (1994)
68. 68.
Goddard, J.D.: Nonlinear elasticity and pressure-dependent wave speeds in granular media. Proc. R. Soc. Lond. A 430, 105–131 (1990)
69. 69.
Kumar, N., Luding, S., Magnanimo, V.: Macroscopic model with anisotropy based on micro-macro information. Acta Mech. 225, 2319–2343 (2014)
70. 70.
Sibille, L., Nicot, F., Donzé, F.-V., Darve, F.: Analysis of failure occurrence from direct simulations. Eur. J. Environ. Civil Eng. 13, 187–201 (2009)
71. 71.
Clayton, C.R.I.: Stiffness at small strain: research and practice. Géotechnique 61, 5–37 (2011)
72. 72.
Addenbrooke, T.I., Potts, D.M., Puzrin, A.M.: The influence of pre-failure soil stiffness on the numerical analysis of tunnel construction. Geotechnique 47(3), 693–712 (1997)
73. 73.
Campbell, C.: Granular material flows: an overview. Powder Technol. 162, 208–229 (2006)
74. 74.
Griffiths, D.V., Lane, P.A.: Slope stability analysis by finite elements. Geotechnique 49, 387–403 (1999)
75. 75.
Einav, I., Puzrin, A.M.: Pressure-dependent elasticity and energy conservation in elastoplastic models for soils. J. Geotech. Geoenviron. Eng. 130, 81–92 (2004)
76. 76.
Jamiolkowski, M.B., Lancellotta, R., Lo Presti, D.: Pre-failure deformation characteristics of geomaterials. In: Proceedings of the Second International Symposium on Pre-Failure Deformation Characteristics of Geomaterials: Torino 99: Torino, Italy 28–30 September, 1999, vol. 1. CRC Press, (1999)Google Scholar
77. 77.
Tutluoğlu, L., Öge, İ.F., Karpuz, C.: Relationship between pre-failure and post-failure mechanical properties of rock material of different origin. Rock Mech. Rock Eng. 48, 121–141 (2015)
78. 78.
Singh, A., Magnanimo, V., Saitoh, K., Luding, S.: The role of gravity or pressure and contact stiffness in granular rheology. New J. Phys. 17, 043028 (2015)
79. 79.
Hartley, R.R., Behringer, R.P.: Logarithmic rate dependence of force networks in sheared granular materials. Nature 421, 928–931 (2003)
80. 80.
Krzakala, F., Kurchan, J.: Landscape analysis of constraint satisfaction problems. Phys. Rev. E. 76, 021122 (2007)
81. 81.
Xu, N., Frenkel, D., Liu, A.J.: Direct determination of the size of basins of attraction of jammed solids. Phys. Rev. Lett. 106, 245502 (2011)
82. 82.
Möbius, R., Heussinger, C.: (Ir) reversibility in dense granular systems driven by oscillating forces. Soft Matter 10, 4806–4812 (2014)
83. 83.
Rognon, P.G., Roux, J.-N., NaaIM, M., Chevoir, F.: Dense flows of cohesive granular materials. J. Fluid Mech. 596, 21–47 (2008)
84. 84.
Sun, J., Sundaresan, S.: A constitutive model with microstructure evolution for flow of rate-independent granular materials. J. Fluid Mech. 682, 590–616 (2011)
85. 85.
Jiang, Y., Liu, M.: Incremental stress-strain relation from granular elasticity: comparison to experiments. Phys. Rev. E 77, 021306 (2008)
86. 86.
Jiang, Y., Liu, M.: Applying GSH to a wide range of experiments in granular media. Eur. Phys. J. E 38, 15 (2015)
87. 87.
Mohan, L.S., Rao, K.K., Nott, P.R.: A frictional Cosserat model for the slow shearing of granular materials. J. Fluid Mech. 457, 377–409 (2002)
88. 88.
Göncü, F.: Mechanics of Granular Materials: Constitutive Behavior and Pattern Transformation. TU Delft, Delft University of Technology, (2012). ISBN 9789461913418Google Scholar
89. 89.
Tejchman, J.: FE Modeling of Shear Localization in Granular Bodies with Micro-polar Hypoplasticity. Springer Series in Geomechanics and Geoengineering. Springer, Berlin (2008)Google Scholar
90. 90.
Kamrin, K., Koval, G.: Nonlocal constitutive relation for steady granular flow. Phys. Rev. Lett. 108, 178301 (2012)
91. 91.
Henann, D.L., Kamrin, K.: A predictive, size-dependentcontinuum model for dense granular flows. Proc. Natl. Acad. Sci. 110, 6730–6735 (2013)
92. 92.
Luding, S.: Granular matter: so much for the jamming point. Nat. Phys. 12, 531–532 (2016)Google Scholar
93. 93.
Luding, S.: Anisotropy in cohesive, frictional granular media. J. Phys. Condens. Matter 17, S2623–S2640 (2005)
94. 94.
Ezaoui, A., Di Benedetto, H.: Experimental measurements of the global anisotropic elastic behaviour of dry Hostun sand during triaxial tests, and effect of sample preparation. Géotechnique 59, 621–635 (2009)
95. 95.
Magnanimo, V., La Ragione, L., Jenkins, J.T., Wang, P., Makse, H.A.: Characterizing the shear and bulk moduli of an idealized granular material. Europhys. Lett. 81, 34006 (2008)
96. 96.
La Ragione, L., Magnanimo, V.: Contact anisotropy and coordination number for a granular assembly: a comparison of distinct-element-method simulations and theory. Phys. Rev. E 85, 031304 (2012)
97. 97.
Christoffersen, J., Mehrabadi, M.M., Nemat-Nasser, S.: A micro-mechanical description of granular material behavior. J. Appl. Mech. 48, 339–344 (1981)
98. 98.
Kumar, N., Imole, O.I., Magnanimo, V., Luding, S.: Evolution of the effective moduli for anisotropic granular materials during shear. In: Luding, S., Yu, A. (eds.) Powders & Grains 2013, pp. 1238–1241. Balkema, Sydney (2013)Google Scholar
99. 99.
Zhang, J., Majmudar, T., Tordesillas, A., Behringer, R.: Statistical properties of a 2D granular material subjected to cyclic shear. Granul. Matter 12, 159–172 (2010)
100. 100.
Göncü, F., Durán, O., Luding, S.: Constitutive relations for the isotropic deformation of frictionless packings of polydisperse spheres. C. R. Mécanique 338, 570–586 (2010)
101. 101.
Kumar, N., Magnanimo, V., Ramaioli, M., Luding, S.: Tuning the bulk properties of granular mixtures by small amount of fines. Powder Technol. 293, 94–112 (2016)
102. 102.
García-Rojo, R., Alonso-Marroquín, F., Herrmann, H.J.: Characterization of the material response in granular ratcheting. Phys. Rev. E 72, 041302 (2005)
103. 103.
O’Hern, C.S., Langer, S.A., Liu, A.J., Nagel, S.R.: Random packings of frictionless particles. Phys. Rev. Lett. 88, 075507 (2002)
104. 104.
Rosato, A.D., Dybenko, O., Horntrop, D.J., Ratnaswamy, V., Kondic, L.: Microstructure evolution in density relaxation by tapping. Phys. Rev. E 81, 061301 (2010)
105. 105.
Hopkins, A.B., Stillinger, F.H., Torquato, S.: Disordered strictly jammed binary sphere packings attain an anomalously large range of densities. Phys. Rev. E 88, 022205 (2013)
106. 106.
Richard, P., Nicodemi, M., Delannay, R., Ribiere, P., Bideau, D.: Slow relaxation and compaction of granular systems. Nat. Mater. 4, 121–128 (2005)
107. 107.
Knight, J.B., Fandrich, C.G., Lau, C.N., Jaeger, H.M., Nagel, S.R.: Density relaxation in a vibrated granular material. Phys. Rev. E 51, 3957 (1995)
108. 108.
Andreotti, B., Forterre, Y., Pouliquen, O.: Granular Media: Between Fluid and Solid. Cambridge University Press, Cambridge (2013)
109. 109.
Hidalgo, R.C., Grosse, C.U., Kun, F., Reinhardt, H.W., Herrmann, H.J.: Evolution of percolating force chains in compressed granular media. Phys. Rev. Lett. 89, 205501 (2002)
110. 110.
Smith, K.C., Fisher, T.S., Alam, M.: Isostaticity of constraints in amorphous jammed systems of soft frictionless platonic solids. Phys. Rev. E 84, 030301 (2011)
111. 111.
Atkinson, S., Stillinger, F.H., Torquato, S.: Existence of isostatic, maximally random jammed monodisperse hard-disk packings. Proc. Natl. Acad. Sci. 111, 18436–18441 (2014)
112. 112.
Göncü, F., Durán, O., Luding, S.: Jamming in frictionless packings of spheres: determination of the critical volume fraction. In: AIP Conference on Proceedings of Powders and Grains 2009, vol. 1145, pp. 531–534 (2009)Google Scholar
113. 113.
Živković, S., Jakšić, Z.M., Arsenović, D., Budinski-Petković, L., Vrhovac, S.B.: Structural characterization of two-dimensional granular systems during the compaction. Granul. Matter 13, 493–502 (2011)
114. 114.
Luding, S., Nicolas, M., Pouliquen, O.: A minimal model for slowdynamics: compaction of granular media under vibration or shear. In: Kolymbas, D., Fellin, W. (eds.) Compaction of Soils, Granulates and Powders, pp. 241–249. A. A. Balkema, Rotterdam (2000)Google Scholar
115. 115.
Darnige, T., Bruand, A., Clement, E.: Creep and fluidity of a real granular packing near jamming. Phys. Rev. Lett. 107, 138303 (2011)
116. 116.
Ness, C., Sun, J.: Two-scale evolution during shear reversal in dense suspensions (2015). arXiv preprint arXiv:1509.01530
117. 117.
Hartkamp, R., Todd, B.D., Luding, S.: A constitutive framework for the non-Newtonian pressure tensor of a simple fluid under planar flows. J. Chem. Phys. 138, 244508 (2013)
118. 118.
Ogarko, V., Luding, S.: Prediction of polydisperse hard-sphere mixture behavior using tridisperse systems. Soft Matter 9, 9530–9534 (2013)
119. 119.
Brujić, J., Song, C., Wang, P., Briscoe, C., Marty, G., Makse, H.A.: Measuring the coordination number and entropy of a 3D jammed emulsion packing by confocal microscopy. Phys. Rev. Lett. 98, 248001 (2007)
120. 120.
Peidong, Y., Frank-Richter, S., Börngen, A., Sperl, M.: Monitoring three-dimensional packings in microgravity. Granul. Matter 16, 165–173 (2014)
121. 121.
Brodu, N., Dijksman, J.A., Behringer, R.P.: Spanning the scales of granular materials through microscopic force imaging. Nat. Commun. 6, 6361 (2015)
122. 122.
Lieou, C.K.C., Langer, J.S.: Nonequilibrium thermodynamics in sheared hard-sphere materials. Phys. Rev. E 85, 061308 (2012)
123. 123.
Dahmen, K.A., Ben-Zion, Y., Uhl, J.T.: Micromechanical model for deformation in solids with universal predictions for stress–strain curves and slip avalanches. Phys. Rev. Lett. 102, 175501 (2009)
124. 124.
Dahmen, K.A., Ben-Zion, Y., Uhl, J.T.: A simple analytic theory for the statistics of avalanches in sheared granular materials. Nat. Phys. 7, 554–557 (2011)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9174265265464783, "perplexity": 3341.842452456835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986661296.12/warc/CC-MAIN-20191016014439-20191016041939-00007.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=441187
|
This isn't really a homework problem. In lab we had an unknown buffer solution and we had to to titrate it with NaOH and HCl to try to identify what the buffer is. In the end I got my pka value equal to 4.52 I can't identify the buffer. My Ka is 3E-5 and I can't match any Ka values to this. Did I just totally screw up my lab??
PhysOrg.com chemistry news on PhysOrg.com >> New method for producing clean hydrogen>> Making ice-cream more nutritious with meat left-overs>> Non-wetting fabric drains sweat
Admin Is there a list of buffers that you have to select from? Or can it be anything?
Nope, we weren't given a list to choose from so I'm guessing it could be anything?
Probably. Although some compounds are much more likely than others.
How did you get your pKa?
I required 19.92 mL of NaOH to neutralize a 10 mL portion of my buffer. I did the same thing using another 10 mL portion of the buffer but this time with HCl and it took me 17 mL. I then used the henderson-hasselbalch equation to find my pKa. I measured my pH of my buffer with a pH meter and got it to be 4.40. I had already standardized my acid and base previously and got 0.1472 M NaOH and 0.1325 M HCl. Also I made sure to change the ml to L for the following calculations. For the concentration of my base and acid I did this: [A] = (19.92 ml NaOH x 0.1472 M NaOH)/10 mL = 0.293 [B] = (17ml HCl x 0.1325 M HCl)/10 ml =0.225 4.40= pka + log [0.225]/[0.293] pka= 4.52
Posted something before thinking it was right... still wrong.
Quote by kooombaya I required 19.92 mL of NaOH to neutralize a 10 mL portion of my buffer. I did the same thing using another 10 mL portion of the buffer but this time with HCl and it took me 17 mL.
What do you mean by "neutralize the buffer"?
Quote by Borek What do you mean by "neutralize the buffer"?
This was how the question was stated in the lab book.
I found out what I did wrong by the way. Thanks for your time.
Quote by kooombaya This was how the question was stated in the lab book.
Can you explain what they meant? I have never seen something like that, even if it is wrong, I have nothing against knowing.
Quote by Borek Can you explain what they meant? I have never seen something like that, even if it is wrong, I have nothing against knowing.
From what I understood it goes something like this:
Say we added 10 mL of 0.1 M NaOH. Then this is the amount of acid in the buffer solution that reacted with the NaOH to reach a new equivalence point.
It's the same for HCl, except the HCl acts with the base in the buffer solution to reach a new equivalence point.
Admin So it was just shifting pH of the buffer by addition of strong acid or base. There was an acid base reaction involved (which can be technically called neutralization), but it didn't end with neutral solution. Not the best wording if you ask me. -- buffer calculator, concentration calculator pH calculator, stoichiometry calculator
Quote by Borek So it was just shifting pH of the buffer by addition of strong acid or base. There was an acid base reaction involved (which can be technically called neutralization), but it didn't end with neutral solution. Not the best wording if you ask me.
Yup exactly. Thanks again.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.808948814868927, "perplexity": 1703.5455868245642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702718570/warc/CC-MAIN-20130516111158-00067-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://www.imsolidstate.com/
|
### M17x 6990m / 6970m overheating
How to fix AMD 6900 series cards overheating in the Alienware M17x. › Continue reading
Tags:
Monday, March 31st, 2014 Uncategorized No Comments
### PAR / Spectrum analyzer
I’ve been working on an off for a while on a project to measure photosynthetically active radiation (PAR) as well as analyze electromagnetic spectrum with the same sensor, at the same time. The spectrum in question is approximately 350-750nm. I mainly envisioned this tool for use with marine aquariums, and if things were going better I probably would have built a prototype.
The sensor is a TAOS TSL3301CL. It is a 102-pixel photodiode array with a serial interface. The sensor contains analog stages for gain and offset as well as ADCs that sample the values and read the results to the serial port. It’s a nice device, but I haven’t been able to get what I want out of it. The response for me has been very non-linear. It’s also a very tiny leadless package so it’s not easy to work with.
My plan was to enclose the sensor in a housing with a series of lenses designed to introduce chromatic aberration or a prism to refract the captured ambient light, and then direct the spectral components of the light towards the sensor. The sensor’s response to wavelength is non-linear, but that can be corrected with a function in software for each pixel.
I had a uOLED 128×128 pixel display that I used to display the intensity of light for each pixel. This worked well, and I would have used an averaging function to estimate PAR from that data.
However, I haven’t been able to get the sensor to respond in what I consider a linear fashion. It is mostly on or off. Moving the light source back and forth from the sensor doesn’t result in any gradient that could be considered measurable. I’ve tried various gain and offset levels, as well as long and short integration times but with no success.
I originally tested it with an mbed, but I had trouble with the synchronous clock. So I ported the code to the mega2560 on my STK600. About that time the display started to die and I don’t have another graphic display at the moment without messing around with an Epson S1D1335 and another graphics library… As such I’m stashing this project for now. If anyone has worked with these sensors before and knows what I’m doing wrong, I’d like to hear from you.
Tags: , ,
Saturday, March 22nd, 2014 Electronics No Comments
### Acrylic polishing and scratch removal
I recently acquired a 46 gallon ZeroEdge aquarium that was in pretty rough shape. There were a lot of scratches as well as marks from calcareous algae. Here is what I did to repair it. › Continue reading
Wednesday, December 11th, 2013 Uncategorized No Comments
### 330W power supply for M17x update
So everyone that has done the 330W power supply mod that I posted earlier has experienced the power supply shutting down at 240 watts of power draw. That is pretty counterproductive since the M17x ships with a 240W power supply. I did some reverse engineering and some load testing and figured out what the problem is.
I built a simple dynamic load from a few resistors, two op-amps, and an IGBT that I salvaged from an old motor drive. I attached the schematic at the bottom for anyone that wants to build a similar device. I didn’t have a small enough current sense shunt resistor to handle the current, so I used feedback from the gate-emitter voltage since it is roughly proportional to collector-emitter current after about 10 volts. I also used an MC34072 op-amp since it’s what I had laying around. It’s a bit crude but it works.
The dynamic load let me test the power supply and confirm that it was shutting down at 240W. It did, with the highest power I could get at the output being 19.6V @ 12.5A, roughly 245 watts. I also noticed quite a bit of buzzing.
It is pretty unlikely that Dell would make a power supply that badly, and it successfully powers the M18x so I took a closer look at the only thing that could have any effect on the power supply: the ID wire. When I figured out how to put the 240W 1-wire ID chip in place of the 330W ID chip, I found out that the M17x couldn’t drive the 1-wire bus. Something was loading it down farther down the line. Cutting the ID trace after the 1-wire PROM fixed the issue and allowed the M17x to drive the bus high and charge the PROM so it would work (the PROM is parasitically powered). The only thing that could have an effect was whatever was behind that trace. › Continue reading
Wednesday, July 10th, 2013 Electronics 10 Comments
### Continuous vs. batch water changes
I have been wondering lately about the effectiveness of building a continuous or automatic daily water change system for my reef aquarium. I really hate batch water changes, but they seem like they would be more effective. I decided to do the math and find out if that was true. A continuous water change can be modeled as a differential equation:
$y'(t) = (f_1)(\frac{y(t)}{V}) - (f_2)(\frac{y(t)}{V})$
where $y(t)$ is the amount of some dissolved substance at time $t$, $f$ is the flow rate in and out of the system, and $V$ is the volume of the tank. We can simplify the model by making some assumptions: a continuous water change will consist of a small volume over a long period of time, thus the flow rate will be low enough to assume complete mixing in the high water flows of a reef aquarium. Flow rates should be chosen to provide an equivalent reduction of dissolved material as a weekly batch water change, however the rate is not of interest here since we are purely interested in comparing the amount of dissolved material reduction with a batch water change, not how long it will take. It will be easiest to calculate volume with a flow rate of 1 liter per hour. We can also assume an input concentration of 0, since the water coming in should be from an RO/DI system. This gives:
$y'(t) = - \frac{y(t)}{284}$
noting that my aquarium $V$ is 75 gallons, or 284 liters. We will also set this up as an initial value problem, with $y(0)$ being an initial concentration of 50 mg/L of nitrate. The goal is reduction to 45 mg/L which we will compare to a batch water change later. For my 284L aquarium, the total concentration of nitrate would be $50 mg/L * 284L = 14200 mg$. This gives:
$y'(t) = - \frac{y(t)}{284}, y(0) = 14200$
The particular solution for this IVP is:
$y(t) = 14200\ e^{\frac{-t}{284}}$
Sunday, November 18th, 2012 Uncategorized 1 Comment
### 330 Watt power supply for Alienware M17x
I successfully managed to modify the M18x 330W power supply to work in the M17x, which allows for running the M17x with a fast processor and SLI / crossfire. [Update: some people are having issues with the 330W PS running a 920XM processor with 7970m CrossfireX. This combination draws more than than 330W in the M17xR2 for some reason. I have an 840QM and 6990m CrossfireX with no issues, about 200W average] The modification is easier if you already have a 240W power supply, since you will already have the DS2502 1-wire EPROM that is required for the mod. If you don’t have a 240W supply you can also order a DS2502 and program it manually with the 1-wire programmer I posted here. › Continue reading
Tags: , ,
Saturday, October 13th, 2012 Electronics 25 Comments
### mbed 1-wire EPROM driver (DS2502)
I wrote some code for my mbed to read and program the memory contents of 1-wire EPROMs like the DS2502. It should work with any device that responds to the same commands. The code can read ROM, status registers and memory pages, and write to the status register and memory pages. I also incorporated support for cyclical redundancy checks since the devices aren’t erasable. I had to build an external circuit for the 12 volt programming pulse to protect the mbed signal pin. If you only need to read you don’t need this, but it is required if you want to program data. Download link to the project source files is below.
Tags: , , ,
Friday, October 5th, 2012 Electronics 7 Comments
### M17x inverter brightness fix
After I killed my original motherboard modifying stuff in my 6970m quest, the brightness of my LCD was stuck at low with both the new R1 and R2 motherboards. The function keys couldn’t adjust brightness and neither could windows. Here’s how I fixed it. This is probably only valid for a CCFL backlit display.
The inverter that drives the fluorescent lamps in my display is based on a MAX8759. This chip has an SMBus interface as well as an ambient light sensor interface and a PWM input. The motherboard uses the SMBus interface to control the inverter. You can directly write values to a brightness register, from 0×00 to 0xFF for minimum to maximum brightness. The function keys send incrementally smaller or larger values to the controller. I pulled out my logic analyzer, mbed, and realterm to watch the bus communication and communicate with the inverter controller.
The communication from the EC to the inverter controller is correct, and you can read the brightness register and see the values change based on the function keys. However the brightness continues to stay low. You can also read the fault register, but no faults were present in my case.
What I figured out was the controller by default uses a mode called “SMBus with DPST”, which takes the SMBus brightness value and multiplies it by the PWM duty cycle. This apparently allows another interface to use the PWM input and dim the display without needing to access the bus. The problem was the PWM duty cycle from the EC was 0%, so the controller kept the brightness at 0 regardless of the SMBus setting. › Continue reading
Tags: , ,
Thursday, October 4th, 2012 Electronics No Comments
### 6970m power issues
I was recently working on getting an AMD 6970m working in an R1 M17x. I didn’t get it working, but that’s probably largely due to the fact that I was testing with a bad card.
When the computer powers on, before it POSTs, it applies voltage to the three power rails for the graphics card. The graphics card is supposed to turn on its regulators and when things stabilize assert the MXM power_good signal. This happens before any signals are applied to the card’s busses. However if one of the regulators is out of spec, the power_good signal remains low and the computer will not even start POST so you won’t get any error codes, just a black screen.
I looked up the datasheets for the card’s two main regulators and measured the voltages produced. The regulator that I think probably runs the memory chips (APL5913) was spot-on but the regulator for the core (ISL6228) was off by about a tenth of a volt. Hence the failure to release power_good and continue boot. › Continue reading
Tags: ,
Monday, October 1st, 2012 Electronics 1 Comment
### Upgrade M17x R1 to R2 motherboard
I have an Alienware M17x that has the Core 2 Quad processor. This isn’t a bad setup, but an i7 in the R2 is much better. There is also the restriction that the R1 can only run the GTX 260m, 280m or AMD 5870m. This makes the M17x mostly obsolete for any modern games. I tried quite a few things to get an AMD 6970m running in the R1 (see here), including modifying the MXM structure in the BIOS. I wasn’t successful, mostly because I eventually figured out that the card I got off eBay to test with was bad. However, I did figure out that an R2 motherboard can replace the R1 motherboard. You will of course need a new processor and the CPU/southbridge is in a different location so you also need an R2 heatsink. Everything else is compatible, and the R2 will run the 6970m.
The only things I had to modify was the LVDS link width for the R1 LCD and the magnesium plate that covers the motherboard. Since the heatsink is in a different location you need to relieve a couple spots.
For the LVDS connection the BIOS for the R2 specifies a 24-bit width and the R1 is an 18-bit width. I modified the MXM system info in the BIOS to fix that and updated the checksum. Then I flashed it to the motherboard. I did this before I figured out the 6970m was dead, so I don’t know if the R1 LCD will work or not without running a modified BIOS since it never POSTed and I fixed the card after flashing the BIOS. › Continue reading
Tags: , ,
Saturday, September 1st, 2012 Electronics 5 Comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4452114403247833, "perplexity": 1682.8401515340888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256757.9/warc/CC-MAIN-20140728011736-00455-ip-10-146-231-18.ec2.internal.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-10th-edition/chapter-2-acute-angles-and-right-triangles-section-2-2-trigonometric-functions-of-non-acute-angles-2-2-exercises-page-59/29
|
# Chapter 2 - Acute Angles and Right Triangles - Section 2.2 Trigonometric Functions of Non-Acute Angles - 2.2 Exercises: 29
$sin$(-300)$^{\circ}$ = $\frac{\sqrt3}{2}$ $cos$(-300)$^{\circ}$ = $\frac{1}{2}$ $tan$(-300)$^{\circ}$ = $\sqrt3$ $cot$(-300)$^{\circ}$ = $\frac{\sqrt3}{3}$ $csc$(-300)$^{\circ}$ = $\frac{2\sqrt3}{3}$ $sec$(-300)$^{\circ}$ = 2
#### Work Step by Step
-300$^{\circ}$ We can solve for the functions by using the coterminal angle. We can find the coterminal angle by adding or subtracting 360$^{\circ}$ as many times as needed. -300$^{\circ}$ + 360$^{\circ}$ = 60$^{\circ}$ $sin$(60)$^{\circ}$ = $\frac{\sqrt3}{2}$ $cos$(60)$^{\circ}$ = $\frac{1}{2}$ $tan$(60)$^{\circ}$ = $\frac{\sqrt3}{1}$ = $\sqrt3$ $cot$(60)$^{\circ}$ = $\frac{1}{\sqrt3}$ = $\frac{\sqrt3}{3}$ $csc$(60)$^{\circ}$ = $\frac{2}{\sqrt3}$ = $\frac{2\sqrt3}{3}$ $sec$(60)$^{\circ}$ = $\frac{2}{1}$ = 2
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4600244164466858, "perplexity": 2109.0742714425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00323-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://askmetips.com/standard-error/standard-error-for-sample-mean.php
|
Home > Standard Error > Standard Error For Sample Mean
# Standard Error For Sample Mean
## Contents
And it turns out, there is. So, in the trial we just did, my wacky distribution had a standard deviation of 9.3. If we magically knew the distribution, there's some true variance here. With statistics, I'm always struggling whether I should be formal in giving you rigorous proofs, but I've come to the conclusion that it's more important to get the working knowledge first my review here
You're just very unlikely to be far away if you took 100 trials as opposed to taking five. When the sampling fraction is large (approximately at 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a "finite population correction"[9] The standard deviation is computed solely from sample attributes. Required fields are marked *Comment Name * Email * Website Find an article Search Feel like "cheating" at Statistics?
## Standard Error Of Mean Calculator
The proportion or the mean is calculated using the sample. Popular Articles 1. To understand this, first we need to understand why a sampling distribution is required.
So if this up here has a variance of-- let's say this up here has a variance of 20. Now, if I do that 10,000 times, what do I get? Now, to show that this is the variance of our sampling distribution of our sample mean, we'll write it right here. Standard Error Regression Step 6: Take the square root of the number you found in Step 5.
Because of random variation in sampling, the proportion or mean calculated using the sample will usually differ from the true proportion or mean in the entire population. Standard Error Of Sample Mean Formula So that's my new distribution. So I think you know that, in some way, it should be inversely proportional to n. http://vassarstats.net/dist2.html But let's say we eventually-- all of our samples, we get a lot of averages that are there.
Standard Error of Sample Estimates Sadly, the values of population parameters are often unknown, making it impossible to compute the standard deviation of a statistic. Standard Error Of The Mean Excel With n = 2 the underestimate is about 25%, but for n = 6 the underestimate is only 5%. Then you do it again, and you do another trial. The standard error can be computed from a knowledge of sample attributes - sample size and sample statistics.
## Standard Error Of Sample Mean Formula
The distribution of these 20,000 sample means indicate how far the mean of a sample may be from the true population mean. https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/standard-error-of-the-mean The standard error is computed from known sample statistics. Standard Error Of Mean Calculator And if it confuses you, let me know. Standard Error Vs Standard Deviation Because the 5,534 women are the entire population, 23.44 years is the population mean, μ {\displaystyle \mu } , and 3.56 years is the population standard deviation, σ {\displaystyle \sigma }
And then you now also understand how to get to the standard error of the mean.Sampling distribution of the sample mean 2Sampling distribution example problemUp NextSampling distribution example problem menuMinitab® 17 SupportWhat is http://askmetips.com/standard-error/standard-error-of-the-sample.php Then the mean here is also going to be 5. This is your standard deviation. √(68.175) = 8.257 Step 6: Divide the number you calculated in Step 6 by the square root of the sample size (in this sample problem, the Sampling from a distribution with a large standard deviation The first data set consists of the ages of 9,732 women who completed the 2012 Cherry Blossom run, a 10-mile race held Standard Error Of The Mean Definition
I don't necessarily believe you. Step 1:Add up all of the numbers: 12 + 13 + 14 + 16 + 17 + 40 + 43 + 55 + 56 + 67 + 78 + 78 + And we've seen from the last video that, one, if-- let's say we were to do it again. http://askmetips.com/standard-error/standard-deviation-standard-error-sample-size.php Expected Value 9.
The mean age was 23.44 years. Standard Error Mean parameters) and with standard errors you use data from your sample. Working...
## Standard error of mean versus standard deviation In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation or the mean with the standard error.
The standard deviation of the age was 3.56 years. Sample question: If a random sample of size 19 is drawn from a population distribution with standard deviation α = 20 then what will be the variance of the sampling distribution In fact, data organizations often set reliability standards that their data must reach before publication. Standard Error Of Proportion The standard error is important because it is used to compute other measures, like confidence intervals and margins of error.
jbstatistics 82,575 views 7:25 standard error.wmv - Duration: 3:27. Step 2: Divide the variance by the number of items in the sample. It's one of those magical things about mathematics. useful reference And you do it over and over again.
For the runners, the population mean age is 33.87, and the population standard deviation is 9.27. You're becoming more normal, and your standard deviation is getting smaller. Because the 9,732 runners are the entire population, 33.88 years is the population mean, μ {\displaystyle \mu } , and 9.27 years is the population standard deviation, σ. Back to Top How to Find the Sample Mean Watch the video or read the steps below: How to Find the Sample Mean: Overview Dividing the sum by the number of
Use the standard error of the mean to determine how precisely the mean of the sample estimates the population mean. Test Your Understanding Problem 1 Which of the following statements is true.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9593684673309326, "perplexity": 476.3247809015298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590051.20/warc/CC-MAIN-20180718041450-20180718061450-00108.warc.gz"}
|
https://shapeout.readthedocs.io/en/0.9.6/sec_qg_mixed_effects.html
|
# Comparing datasets with LMM¶
Consider the following datasets. A treatment is applied three times at different time points. For each treatment, a control measurement is performed. For each measurement day, a reservoir measurement is performed additionally for treatment and control.
• Day1:
• one sample, called “Treatment I”, measured at flow rates of 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir
• one control, called “Control I”, measured at flow rates 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir
• Day2:
• two samples, called “Treatment II” and “Treatment III”, measured at flow rates 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir
• two controls, called “Control II” and “Control III”, measured at flow rates 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir
Linear mixed models (LMM) allow to assign a significance to a treatment (fixed effect) while considering the systematic bias in-between the measurement repetitions (random effect).
We will assume that the datasets are loaded into Shape-Out and that invalid events have been filtered (see e.g. Excluding invalid events). The Analyze configuration tab enables the comparison of an experiment (control and treatment) and repetitions of the experiment using LMM [HKP+17], [HMMO18].
• Basic analysis:
Assign which measurement is a control and which is a treatment by choosing the option in the dropdown lists under Interpretation. Group the pairs of control and treatment done in one experiment, by choosing an index number, called Repetition. Here, Treatment I and Control I are one experiment – called Repetition 1, Treatment II and Control II are a repetition of the experiment – called Repetition 2, Treatment III and Control III are another repetition of the experiment – called Repetition 3.
Press Apply to start the calculations. A text file will open to show the results.
The most important numbers are:
• Fixed effects:
(Intercept)-Estimate
The mean of the parameter chosen for all controls.
treatment-Estimate
The effect size of the parameter chosen between the mean of all controls and the mean of all treatments.
• Full coefficient table: Shows the effect size of the parameter chosen between control and treatment for every single experiment.
• Model-Pr(>Chisq): Shows the p-value and the significance of the test.
• Differential feature analysis:
The LMM analysis is only applicable if the respective measurements show little difference in the reservoir for the feature chosen. For instance, if a treatment results in non-spherical cells in the reservoir, then the deformation recorded for the treatment might be biased towards higher values. In this case, the information of the reservoir measurement has to be included by means of the differential deformation [HMMO18]. This can be achieved by selecting the respective reservoir measurements in the dropdown menu.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9380010962486267, "perplexity": 2354.8401471480947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514893.41/warc/CC-MAIN-20191208202454-20191208230454-00308.warc.gz"}
|
http://www.distance-calculator.co.uk/towns-within-a-radius-of.php?t=Uitgeest&c=Netherlands
|
Cities, Towns and Places within a 60 mile radius of Uitgeest, Netherlands
Get a list of towns within a 60 mile radius of Uitgeest or between two set distances, click on the markers in the satelitte map to get maps and road trip directions. If this didn't quite work how you thought then you might like the Places near Uitgeest map tool (beta).
The radius entered failed to produce results - we reset it to a max of 60 miles - apologies for any inconvenience
# Distance Calculator > World Distances > Radius Distances > Uitgeest distance calculator > Uitgeest (Netherlands ) Radius distances
Get towns between and Miles KM
Click to View Visual Places on Map or View Visual Radius on Map
miles
Showing 0100 places between 10 and 60 miles of Uitgeest
(Increase the miles radius to get more places returned *)
< 25 miles > 25 miles
Bergen Aan Zee, Netherlands is 10 miles away
Bloemendaal, Netherlands is 10 miles away
Broek Op Langedijk, Netherlands is 10 miles away
Den Ilp, Netherlands is 10 miles away
Haarlemmerliede, Netherlands is 10 miles away
Halfweg, Netherlands is 10 miles away
Noordbeemster, Netherlands is 10 miles away
Overveen, Netherlands is 10 miles away
Purmerend, Netherlands is 10 miles away
Purmerland, Netherlands is 10 miles away
Rustenburg, Netherlands is 10 miles away
Sint Pancras, Netherlands is 10 miles away
Ursem, Netherlands is 10 miles away
Wijdewormer, Netherlands is 10 miles away
Zuidoostbeemster, Netherlands is 10 miles away
Zwanenburg, Netherlands is 10 miles away
Avenhorn, Netherlands is 11 miles away
Geuzenveld, Netherlands is 11 miles away
Heerhugowaard, Netherlands is 11 miles away
Hensbroek, Netherlands is 11 miles away
Hobrede, Netherlands is 11 miles away
Ilpendam, Netherlands is 11 miles away
Kwadijk, Netherlands is 11 miles away
Landsmeer, Netherlands is 11 miles away
Oudendijk, Netherlands is 11 miles away
Purmer, Netherlands is 11 miles away
Slotermeer, Netherlands is 11 miles away
Tuindorp, Netherlands is 11 miles away
Zuid-scharwoude, Netherlands is 11 miles away
Beets, Netherlands is 12 miles away
Boesingheliede, Netherlands is 12 miles away
Bos En Lommer, Netherlands is 12 miles away
De Goorn, Netherlands is 12 miles away
Goorn, Netherlands is 12 miles away
Haarlem, Netherlands is 12 miles away
Langedijk, Netherlands is 12 miles away
Nieuwe Brug, Netherlands is 12 miles away
Noord-scharwoude, Netherlands is 12 miles away
Oosthuizen, Netherlands is 12 miles away
Osdorp, Netherlands is 12 miles away
Oudkarspel, Netherlands is 12 miles away
Schalkwijk, Netherlands is 12 miles away
Schoorl, Netherlands is 12 miles away
Schoorldam, Netherlands is 12 miles away
Sloterdijk, Netherlands is 12 miles away
Watergang, Netherlands is 12 miles away
Wogmeer, Netherlands is 12 miles away
Aerdenhout, Netherlands is 13 miles away
Axwijk, Netherlands is 13 miles away
Bentveld, Netherlands is 13 miles away
De Jordaan, Netherlands is 13 miles away
Groet, Netherlands is 13 miles away
Heemstede, Netherlands is 13 miles away
Lijnden, Netherlands is 13 miles away
Middelie, Netherlands is 13 miles away
Obdam, Netherlands is 13 miles away
Overtoomseveld, Netherlands is 13 miles away
Schardam, Netherlands is 13 miles away
Schouw, Netherlands is 13 miles away
Sloten, Netherlands is 13 miles away
Spierdijk, Netherlands is 13 miles away
Vijfhuizen, Netherlands is 13 miles away
Warden, Netherlands is 13 miles away
Warder, Netherlands is 13 miles away
Badhoevedorp, Netherlands is 14 miles away
Berkhout, Netherlands is 14 miles away
Bobeldijk, Netherlands is 14 miles away
Broek, Netherlands is 14 miles away
Broek In Waterland, Netherlands is 14 miles away
Camp, Netherlands is 14 miles away
Cruquius, Netherlands is 14 miles away
Edam, Netherlands is 14 miles away
Eenigenburg, Netherlands is 14 miles away
Harenkarspel, Netherlands is 14 miles away
Krabbendam, Netherlands is 14 miles away
Monnickendam, Netherlands is 14 miles away
Monnikendam, Netherlands is 14 miles away
Nieuwe Meer, Netherlands is 14 miles away
Oude-niedorp, Netherlands is 14 miles away
Scharwoude, Netherlands is 14 miles away
Slotervaart, Netherlands is 14 miles away
Tuitjenhorn, Netherlands is 14 miles away
Veenhuizen, Netherlands is 14 miles away
Verlaat, Netherlands is 14 miles away
Warmenhuizen, Netherlands is 14 miles away
Zandvoort, Netherlands is 14 miles away
Zedde, Netherlands is 14 miles away
Zuidermeer, Netherlands is 14 miles away
Zunderdorp, Netherlands is 14 miles away
t Veld, Netherlands is 15 miles away
Amsterdam, Netherlands is 15 miles away
Burgerbrug, Netherlands is 15 miles away
Dirkshorn, Netherlands is 15 miles away
Katham, Netherlands is 15 miles away
Katwoude, Netherlands is 15 miles away
Munnikeveld, Netherlands is 15 miles away
Noordermeer, Netherlands is 15 miles away
Opmeer, Netherlands is 15 miles away
Ransdorp, Netherlands is 15 miles away
Spanbroek, Netherlands is 15 miles away
Click to See place names or View Visual Places on Map
Click to go to the top or View Visual Radius on Map
World Distances
Need to calculate a distance for Uitgeest, Netherlands - use this Uitgeest distance calculator.
To view distances for Netherlands alone this Netherlands distance calculator
If you have a question relating to this area then we'd love to hear it! Chec out our facebook, G+ or Twitter pages above!
Don't forget you can increase the radius in the tool above to 50, 100 or 1000 miles to get a list of towns or cities that are in the vicinity of or are local to Uitgeest. You can also specify a list of towns or places that you want returned between two distances in both Miles(mi) or Kilometres (km) .
Europe Distances
* results returned are limited for each query
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9625992178916931, "perplexity": 15076.285504951602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00371.warc.gz"}
|
https://math.libretexts.org/TextMaps/Precalculus/Map%3A_Precalculus_(Stitz-Zeager)/10%3A_Foundations_of_Trigonometry/10.4%3A_Trigonometric_Identities
|
$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
# 10.4: Trigonometric Identities
$$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$
$$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$
In Section \ref{CircularFunctions}, we saw the utility of the Pythagorean Identities in Theorem \ref{pythids} along with the Quotient and Reciprocal Identities in Theorem \ref{recipquotid}. Not only did these identities help us compute the values of the circular functions for angles, they were also useful in simplifying expressions involving the circular functions. In this section, we introduce several collections of identities which have uses in this course and beyond. Our first set of identities is the Even / Odd' identities.\footnote{As mentioned at the end of Section \ref{TheUnitCircle}, properties of the circular functions when thought of as functions of angles in radian measure hold equally well if we view these functions as functions of real numbers. Not surprisingly, the Even / Odd properties of the circular functions are so named because they identify cosine and secant as even functions, while the remaining four circular functions are odd. (See Section \ref{GraphsofFunctions}.)}
Note: Even / Odd Identities
For all applicable angles $$\theta$$:
• $$\cos(-\theta) = \cos(\theta)$$
• $$\sec(-\theta) = \sec(\theta)$$
• $$\sin(-\theta) = -\sin(\theta)$$
• $$\csc(-\theta) = -\csc(\theta)$$
• $$\tan(-\theta) = -\tan(\theta)$$
• $$\cot(-\theta) = -\cot(\theta)$$
In light of the Quotient and Reciprocal Identities, Theorem \ref{recipquotid}, it suffices to show $$\cos(-\theta) = \cos(\theta)$$ and $$\sin(-\theta) = -\sin(\theta)$$. The remaining four circular functions can be expressed in terms of $$\cos(\theta)$$ and $$\sin(\theta)$$ so the proofs of their Even / Odd Identities are left as exercises. Consider an angle $$\theta$$ plotted in standard position. Let $$\theta_ { o}$$ be the angle coterminal with $$\theta$$ with $$0 \leq \theta_ { o} < 2\pi$$. (We can construct the angle $$\theta_ { o}$$ by rotating counter-clockwise from the positive $$x$$-axis to the terminal side of $$\theta$$ as pictured below.) Since $$\theta$$ and $$\theta_ { o}$$ are coterminal, $$\cos(\theta) = \cos(\theta_ { o})$$ and $$\sin(\theta) = \sin(\theta_ { o})$$.
We now consider the angles $$-\theta$$ and $$-\theta_ { o}$$. Since $$\theta$$ is coterminal with $$\theta_ { o}$$, there is some integer $$k$$ so that $$\theta = \theta_ { o} + 2\pi \cdot k$$. Therefore, $$-\theta = -\theta_ { o} - 2\pi \cdot k = -\theta_ { o} + 2\pi \cdot(-k)$$. Since $$k$$ is an integer, so is $$(-k)$$, which means $$-\theta$$ is coterminal with $$-\theta_ { o}$$. Hence, $$\cos(-\theta) = \cos(-\theta_ { o})$$ and $$\sin(-\theta) = \sin(-\theta_ { o})$$. Let $$P$$ and $$Q$$ denote the points on the terminal sides of $$\theta_ { o}$$ and $$-\theta_ { o}$$, respectively, which lie on the Unit Circle. By definition, the coordinates of $$P$$ are $$(\cos(\theta_ { o}),\sin(\theta_ { o}))$$ and the coordinates of $$Q$$ are $$(\cos(-\theta_ { o}),\sin(-\theta_ { o}))$$. Since $$\theta_ { o}$$ and $$-\theta_ { o}$$ sweep out congruent central sectors of the Unit Circle, it follows that the points $$P$$ and $$Q$$ are symmetric about the $$x$$-axis. Thus, $$\cos(-\theta_ { o}) = \cos(\theta_ { o})$$ and $$\sin(-\theta_ { o}) = -\sin(\theta_ { o})$$. Since the cosines and sines of $$\theta_ { o}$$ and $$-\theta_ { o}$$ are the same as those for $$\theta$$ and $$-\theta$$, respectively, we get $$\cos(-\theta) = \cos(\theta)$$ and $$\sin(-\theta) = -\sin(\theta)$$, as required. The Even / Odd Identities are readily demonstrated using any of the common angles' noted in Section \ref{TheUnitCircle}. Their true utility, however, lies not in computation, but in simplifying expressions involving the circular functions. In fact, our next batch of identities makes heavy use of the Even / Odd Identities.
Note: Sum and Difference Identities for Cosine
For all angles $$\alpha$$ and $$\beta$$:
• $$\cos(\alpha + \beta) = \cos(\alpha) \cos(\beta) - \sin(\alpha) \sin(\beta)$$
• $$\cos(\alpha - \beta) = \cos(\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta)$$
We first prove the result for differences. As in the proof of the Even / Odd Identities, we can reduce the proof for general angles $$\alpha$$ and $$\beta$$ to angles $$\alpha_ { o}$$ and $$\beta_ { o}$$, coterminal with $$\alpha$$ and $$\beta$$, respectively, each of which measure between $$0$$ and $$2\pi$$ radians. Since $$\alpha$$ and $$\alpha_ { o}$$ are coterminal, as are $$\beta$$ and $$\beta_ { o}$$, it follows that $$\alpha - \beta$$ is coterminal with $$\alpha_ { o} - \beta_ { o}$$. Consider the case below where $$\alpha_ { o} \geq \beta_ { o}$$.
Since the angles $$POQ$$ and $$AOB$$ are congruent, the distance between $$P$$ and $$Q$$ is equal to the distance between $$A$$ and $$B$$.\footnote{In the picture we've drawn, the \underline{tri}angles $$POQ$$ and $$AOB$$ are congruent, which is even better. However, $$\alpha_ { o} - \beta_ { o}$$ could be $$0$$ or it could be $$\pi$$, neither of which makes a triangle. It could also be larger than $$\pi$$, which makes a triangle, just not the one we've drawn. You should think about those three cases.} The distance formula, Equation \ref{distanceformula}, yields
$\begin{array}{rcl} \sqrt{(\cos(\alpha_ { o}) - \cos(\beta_ { o}))^2 + (\sin(\alpha_ { o}) - \sin(\beta_ { o}))^2 } & = & \sqrt{(\cos(\alpha_ { o} - \beta_ { o}) - 1)^2 + (\sin(\alpha_ { o} - \beta_ { o}) - 0)^2} \\ \end{array}$
Squaring both sides, we expand the left hand side of this equation as
$\begin{array}{rcl} (\cos(\alpha_ { o}) - \cos(\beta_ { o}))^2 + (\sin(\alpha_ { o}) - \sin(\beta_ { o}))^2 & = & \cos^2(\alpha_ { o}) - 2\cos(\alpha_ { o})\cos(\beta_ { o}) + \cos^2(\beta_ { o}) \\ & & + \sin^2(\alpha_ { o}) - 2\sin(\alpha_ { o})\sin(\beta_ { o}) + \sin^2(\beta_ { o}) \\ & = & \cos^2(\alpha_ { o}) + \sin^2(\alpha_ { o}) + \cos^2(\beta_ { o}) + \sin^2(\beta_ { o}) \\ & & - 2\cos(\alpha_ { o})\cos(\beta_ { o}) - 2\sin(\alpha_ { o})\sin(\beta_ { o}) \end{array}$
From the Pythagorean Identities, $$\cos^2(\alpha_ { o}) + \sin^2(\alpha_ { o}) = 1$$ and $$\cos^2(\beta_ { o}) + \sin^2(\beta_ { o}) = 1$$, so
$\begin{array}{rcl} (\cos(\alpha_ { o}) - \cos(\beta_ { o}))^2 + (\sin(\alpha_ { o}) - \sin(\beta_ { o}))^2 & = & 2 - 2\cos(\alpha_ { o})\cos(\beta_ { o}) - 2\sin(\alpha_ { o})\sin(\beta_ { o}) \end{array}$
Turning our attention to the right hand side of our equation, we find
$\begin{array}{rcl} (\cos(\alpha_ { o} - \beta_ { o}) - 1)^2 + (\sin(\alpha_ { o} - \beta_ { o}) - 0)^2 & = & \cos^2(\alpha_ { o} - \beta_ { o}) - 2\cos(\alpha_ { o} - \beta_ { o}) + 1 + \sin^2(\alpha_ { o} - \beta_ { o}) \\ & = & 1 + \cos^2(\alpha_ { o} - \beta_ { o}) + \sin^2(\alpha_ { o} - \beta_ { o}) - 2\cos(\alpha_ { o} - \beta_ { o}) \\ \end{array}$
Once again, we simplify $$\cos^2(\alpha_ { o} - \beta_ { o}) + \sin^2(\alpha_ { o} - \beta_ { o})= 1$$, so that
$\begin{array}{rcl} (\cos(\alpha_ { o} - \beta_ { o}) - 1)^2 + (\sin(\alpha_ { o} - \beta_ { o}) - 0)^2 & = & 2 - 2\cos(\alpha_ { o} - \beta_ { o}) \\ \end{array}$
Putting it all together, we get $$2 - 2\cos(\alpha_ { o})\cos(\beta_ { o}) - 2\sin(\alpha_ { o})\sin(\beta_ { o}) = 2 - 2\cos(\alpha_ { o} - \beta_ { o})$$, which simplifies to: $$\cos(\alpha_ { o} - \beta_ { o}) = \cos(\alpha_ { o})\cos(\beta_ { o}) + \sin(\alpha_ { o})\sin(\beta_ { o})$$. Since $$\alpha$$ and $$\alpha_ { o}$$, $$\beta$$ and $$\beta_ { o}$$ and $$\alpha - \beta$$ and $$\alpha_ { o}- \beta_ { o}$$ are all coterminal pairs of angles, we have $$\cos(\alpha - \beta) = \cos(\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta)$$. For the case where $$\alpha_ { o} \leq \beta_ { o}$$, we can apply the above argument to the angle $$\beta_ { o} - \alpha_ { o}$$ to obtain the identity $$\cos(\beta_ { o} - \alpha_ { o}) = \cos(\beta_ { o})\cos(\alpha_ { o}) + \sin(\beta_ { o})\sin(\alpha_ { o})$$. Applying the Even Identity of cosine, we get $$\cos(\beta_ { o} - \alpha_ { o}) = \cos( - (\alpha_ { o} - \beta_ { o})) = \cos(\alpha_ { o} - \beta_ { o})$$, and we get the identity in this case, too.
To get the sum identity for cosine, we use the difference formula along with the Even/Odd Identities
$\cos(\alpha + \beta) = \cos(\alpha - (-\beta)) = \cos(\alpha) \cos(-\beta) + \sin(\alpha) \sin(-\beta) = \cos(\alpha) \cos(\beta) - \sin(\alpha) \sin(\beta)$
We put these newfound identities to good use in the following example.
Example $$\PageIndex{1}$$: Cosine Sum and Difference
1. Find the exact value of $$\cos\left(15^{\circ}\right)$$.
2. Verify the identity: $$\cos\left(\frac{\pi}{2} - \theta\right) = \sin(\theta)$$.
Solution
1. In order to use Theorem \ref{cosinesumdifference} to find $$\cos\left(15^{\circ}\right)$$, we need to write $$15^{\circ}$$ as a sum or difference of angles whose cosines and sines we know. One way to do so is to write $$15^{\circ} = 45^{\circ} - 30^{\circ}$$.
$\begin{array}{rcl} \cos\left(15^{\circ}\right) & = & \cos\left(45^{\circ} - 30^{\circ} \right) \\ & = & \cos\left(45^{\circ}\right)\cos\left(30^{\circ} \right) + \sin\left(45^{\circ}\right)\sin\left(30^{\circ} \right) \\ & = & \left( \dfrac{\sqrt{2}}{2} \right)\left( \dfrac{\sqrt{3}}{2} \right) + \left( \dfrac{\sqrt{2}}{2} \right)\left( \dfrac{1}{2} \right)\\ & = & \dfrac{\sqrt{6}+ \sqrt{2}}{4} \\ \end{array}$
1. In a straightforward application of Theorem \ref{cosinesumdifference}, we find
$\begin{array}{rcl} \cos\left(\dfrac{\pi}{2} - \theta\right) & = & \cos\left(\dfrac{\pi}{2}\right)\cos\left(\theta\right) + \sin\left(\dfrac{\pi}{2}\right)\sin\left(\theta \right) \\ & = & \left( 0 \right)\left( \cos(\theta) \right) + \left( 1 \right)\left( \sin(\theta) \right) \\ & = & \sin(\theta) \\ \end{array}$
The identity verified in Example $$\PageIndex{1}$$, namely, $$\cos\left(\frac{\pi}{2} - \theta\right) = \sin(\theta)$$, is the first of the celebrated cofunction' identities. These identities were first hinted at in Exercise \ref{cofunctionforeshadowing} in Section \ref{TheUnitCircle}. From $$\sin(\theta) = \cos\left(\frac{\pi}{2} - \theta\right)$$, we get:
$\sin\left(\dfrac{\pi}{2} - \theta\right) = \cos\left(\dfrac{\pi}{2} -\left[\dfrac{\pi}{2} - \theta\right]\right) = \cos(\theta),$
which says, in words, that the co'sine of an angle is the sine of its co'mplement. Now that these identities have been established for cosine and sine, the remaining circular functions follow suit. The remaining proofs are left as exercises.
Note: Cofunction Identities
For all applicable angles $$\theta$$:
• $$\cos\left(\dfrac{\pi}{2} - \theta \right) = \sin(\theta)$$
• $$\sin\left(\dfrac{\pi}{2} - \theta \right) = \cos(\theta)$$
• $$\sec\left(\dfrac{\pi}{2} - \theta \right) = \csc(\theta)$$
• $$\csc\left(\dfrac{\pi}{2} - \theta \right) = \sec(\theta)$$
• $$\tan\left(\dfrac{\pi}{2} - \theta \right) = \cot(\theta)$$
• $$\cot\left(\dfrac{\pi}{2} - \theta \right) = \tan(\theta)$$
With the Cofunction Identities in place, we are now in the position to derive the sum and difference formulas for sine. To derive the sum formula for sine, we convert to cosines using a cofunction identity, then expand using the difference formula for cosine
$\begin{array}{rcl} \sin(\alpha + \beta) & = & \cos\left( \dfrac{\pi}{2} - (\alpha + \beta) \right) \\ & = & \cos\left( \left[\dfrac{\pi}{2} - \alpha \right] - \beta \right) \\ & = & \cos\left(\dfrac{\pi}{2} - \alpha \right) \cos(\beta) + \sin\left(\dfrac{\pi}{2} - \alpha \right)\sin(\beta) \\ & = & \sin(\alpha) \cos(\beta) + \cos(\alpha) \sin(\beta) \\ \end{array}$
We can derive the difference formula for sine by rewriting $$\sin(\alpha - \beta)$$ as $$\sin(\alpha + (-\beta))$$ and using the sum formula and the Even / Odd Identities. Again, we leave the details to the reader.
Sum and Difference Identities for Sine
For all angles $$\alpha$$ and $$\beta$$, \index{Difference Identity ! for sine} \index{Sum Identity ! for sine}
• $$\sin(\alpha + \beta) = \sin(\alpha) \cos(\beta) + \cos(\alpha) \sin(\beta)$$
• $$\sin(\alpha - \beta) = \sin(\alpha) \cos(\beta) - \cos(\alpha) \sin(\beta)$$
Example $$\PageIndex{1}$$:
1. Find the exact value of $$\sin\left(\frac{19 \pi}{12}\right)$$
2. If $$\alpha$$ is a Quadrant II angle with $$\sin(\alpha) = \frac{5}{13}$$, and $$\beta$$ is a Quadrant III angle with $$\tan(\beta) = 2$$, find $$\sin(\alpha - \beta)$$.
3. Derive a formula for $$\tan(\alpha + \beta)$$ in terms of $$\tan(\alpha)$$ and $$\tan(\beta)$$.
Solution
1. As in Example \ref{cosinesumdiffex}, we need to write the angle $$\frac{19 \pi}{12}$$ as a sum or difference of common angles. The denominator of $$12$$ suggests a combination of angles with denominators $$3$$ and $$4$$. One such combination is $$\; \frac{19 \pi}{12} = \frac{4 \pi}{3} + \frac{\pi}{4}$$. Applying Theorem \ref{sinesumdifference}, we get
$\begin{array}{rcl} \sin\left(\dfrac{19 \pi}{12}\right) & = & \sin\left(\dfrac{4 \pi}{3} + \dfrac{\pi}{4} \right) \\ & = & \sin\left(\dfrac{4 \pi}{3} \right)\cos\left(\dfrac{\pi}{4} \right) + \cos\left(\dfrac{4 \pi}{3} \right)\sin\left(\dfrac{\pi}{4} \right) \\ & = & \left( -\dfrac{\sqrt{3}}{2} \right)\left( \dfrac{\sqrt{2}}{2} \right) + \left( -\dfrac{1}{2} \right)\left( \dfrac{\sqrt{2}}{2} \right) \\ & = & \dfrac{-\sqrt{6}- \sqrt{2}}{4} \\ \end{array}$
1. In order to find $$\sin(\alpha - \beta)$$ using Theorem \ref{sinesumdifference}, we need to find $$\cos(\alpha)$$ and both $$\cos(\beta)$$ and $$\sin(\beta)$$. To find $$\cos(\alpha)$$, we use the Pythagorean Identity $$\cos^2(\alpha) + \sin^2(\alpha) = 1$$. Since $$\sin(\alpha) = \frac{5}{13}$$, we have $$\cos^{2}(\alpha) + \left(\frac{5}{13}\right)^2 = 1$$, or $$\cos(\alpha) = \pm \frac{12}{13}$$. Since $$\alpha$$ is a Quadrant II angle, $$\cos(\alpha) = -\frac{12}{13}$$. We now set about finding $$\cos(\beta)$$ and $$\sin(\beta)$$. We have several ways to proceed, but the Pythagorean Identity $$1 + \tan^{2}(\beta) = \sec^{2}(\beta)$$ is a quick way to get $$\sec(\beta)$$, and hence, $$\cos(\beta)$$. With $$\tan(\beta) = 2$$, we get $$1 + 2^2 = \sec^{2}(\beta)$$ so that $$\sec(\beta) = \pm \sqrt{5}$$. Since $$\beta$$ is a Quadrant III angle, we choose $$\sec(\beta) = -\sqrt{5}$$ so $$\cos(\beta) = \frac{1}{\sec(\beta)} = \frac{1}{-\sqrt{5}} = -\frac{\sqrt{5}}{5}$$. We now need to determine $$\sin(\beta)$$. We could use The Pythagorean Identity $$\cos^{2}(\beta) + \sin^{2}(\beta) = 1$$, but we opt instead to use a quotient identity. From $$\tan(\beta) = \frac{\sin(\beta)}{\cos(\beta)}$$, we have $$\sin(\beta) = \tan(\beta) \cos(\beta)$$ so we get $$\sin(\beta) = (2) \left( -\frac{\sqrt{5}}{5}\right) = - \frac{2 \sqrt{5}}{5}$$. We now have all the pieces needed to find $$\sin(\alpha - \beta)$$:
$\begin{array}{rcl} \sin(\alpha - \beta) & = & \sin(\alpha)\cos(\beta) - \cos(\alpha)\sin(\beta) \\ & = & \left( \dfrac{5}{13} \right)\left( -\dfrac{\sqrt{5}}{5} \right) - \left( -\dfrac{12}{13} \right)\left( - \dfrac{2 \sqrt{5}}{5} \right) \\ & = & -\dfrac{29\sqrt{5}}{65} \\ \end{array}$
We can start expanding $$\tan(\alpha + \beta)$$ using a quotient identity and our sum formulas
$\begin{array}{rcl} \tan(\alpha + \beta) & = & \dfrac{\sin(\alpha + \beta)}{\cos(\alpha + \beta)} \\ & = & \dfrac{\sin(\alpha) \cos(\beta) + \cos(\alpha) \sin(\beta)}{\cos(\alpha) \cos(\beta) - \sin(\alpha) \sin(\beta)} \\ \end{array}$
Since $$\tan(\alpha) = \frac{\sin(\alpha)}{\cos(\alpha)}$$ and $$\tan(\beta) = \frac{\sin(\beta)}{\cos(\beta)}$$, it looks as though if we divide both numerator and denominator by $$\cos(\alpha) \cos(\beta)$$ we will have what we want
$\begin{array}{rcl} \tan(\alpha + \beta) & = & \dfrac{\sin(\alpha) \cos(\beta) + \cos(\alpha) \sin(\beta)}{\cos(\alpha) \cos(\beta) - \sin(\alpha) \sin(\beta)} \cdot\dfrac{\dfrac{1}{\cos(\alpha) \cos(\beta)}}{\dfrac{1}{\cos(\alpha) \cos(\beta)}}\\ & & \\ & = & \dfrac{\dfrac{\sin(\alpha) \cos(\beta)}{\cos(\alpha) \cos(\beta)} + \dfrac{\cos(\alpha) \sin(\beta)}{\cos(\alpha) \cos(\beta)}}{\dfrac{\cos(\alpha) \cos(\beta)}{\cos(\alpha) \cos(\beta)} - \dfrac{\sin(\alpha) \sin(\beta)}{\cos(\alpha) \cos(\beta)}}\\ & & \\ & = & \dfrac{\dfrac{\sin(\alpha) \cancel{\cos(\beta)}}{\cos(\alpha) \cancel{\cos(\beta)}} + \dfrac{\cancel{\cos(\alpha)} \sin(\beta)}{\cancel{\cos(\alpha)} \cos(\beta)}}{\dfrac{\cancel{\cos(\alpha)} \cancel{\cos(\beta)}}{\cancel{\cos(\alpha)} \cancel{\cos(\beta)}} - \dfrac{\sin(\alpha) \sin(\beta)}{\cos(\alpha) \cos(\beta)}}\\ & & \\ & = & \dfrac{\tan(\alpha) + \tan(\beta)}{1 -\tan(\alpha) \tan(\beta)}\\ \end{array}$
Naturally, this formula is limited to those cases where all of the tangents are defined.\qed
The formula developed in Exercise \ref{sinesumanddiffex} for $$\tan(\alpha + \beta)$$ can be used to find a formula for $$\tan(\alpha - \beta)$$ by rewriting the difference as a sum, $$\tan(\alpha + (-\beta))$$, and the reader is encouraged to fill in the details. Below we summarize all of the sum and difference formulas for cosine, sine and tangent.
Note
Sum and Difference Identities:} For all applicable angles $$\alpha$$ and $$\beta$$, \index{Difference Identity ! for tangent} \index{Sum Identity ! for tangent} \index{Difference Identity ! for cosine} \index{Sum Identity ! for cosine} \index{Difference Identity ! for sine} \index{Sum Identity ! for sine}
• $$\cos(\alpha \pm \beta) = \cos(\alpha) \cos(\beta) \mp \sin(\alpha) \sin(\beta)$$
• $$\sin(\alpha \pm \beta) = \sin(\alpha) \cos(\beta) \pm \cos(\alpha) \sin(\beta)$$
• $$\tan(\alpha \pm \beta) = \dfrac{\tan(\alpha) \pm \tan(\beta)}{1 \mp \tan(\alpha) \tan(\beta)}$$
In the statement of Theorem \ref{circularsumdifference}, we have combined the cases for the sum $+$' and difference \)-$' of angles into one formula. The convention here is that if you want the formula for the sum $+$' of two angles, you use the top sign in the formula; for the difference, \)-$', use the bottom sign. For example, $\tan(\alpha - \beta) = \dfrac{\tan(\alpha) - \tan(\beta)}{1 + \tan(\alpha) \tan(\beta)}$
If we specialize the sum formulas in Theorem \ref{circularsumdifference} to the case when $$\alpha = \beta$$, we obtain the following Double Angle' Identities.
Note Double Angle Identities
For all applicable angles $$\theta$$, \index{Double Angle Identities}
• $$\cos(2\theta) = \left\{ \begin{array}{l} \cos^{2}(\theta) - \sin^{2}(\theta)\\ [5pt] 2\cos^{2}(\theta) - 1 \\ [5pt] 1-2\sin^{2}(\theta) \end{array} \right.$$
• $$\sin(2\theta) = 2\sin(\theta)\cos(\theta)$$
• $$\tan(2\theta) = \dfrac{2\tan(\theta)}{1 - \tan^{2}(\theta)}$$
The three different forms for $$\cos(2\theta)$$ can be explained by our ability to exchange' squares of cosine and sine via the Pythagorean Identity $$\cos^{2}(\theta) + \sin^{2}(\theta) = 1$$ and we leave the details to the reader. It is interesting to note that to determine the value of $$\cos(2\theta)$$, only \textit{one} piece of information is required: either $$\cos(\theta)$$ or $$\sin(\theta)$$. To determine $$\sin(2\theta)$$, however, it appears that we must know both $$\sin(\theta)$$ and $$\cos(\theta)$$. In the next example, we show how we can find $$\sin(2\theta)$$ knowing just one piece of information, namely $$\tan(\theta)$$.
Example $$\PageIndex{1}$$:
1. Suppose $$P(-3,4)$$ lies on the terminal side of $$\theta$$ when $$\theta$$ is plotted in standard position. Find $$\cos(2\theta)$$ and $$\sin(2\theta)$$ and determine the quadrant in which the terminal side of the angle $$2\theta$$ lies when it is plotted in standard position.
2. If $$\sin(\theta) = x$$ for $$-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$$, find an expression for $$\sin(2\theta)$$ in terms of $$x$$.
3. \label{doubleanglesinewtan} Verify the identity: $$\sin(2\theta) = \dfrac{2\tan(\theta)}{1 + \tan^{2}(\theta)}$$.
4. Express $$\cos(3\theta)$$ as a polynomial in terms of $$\cos(\theta)$$.
Solution
1. Using Theorem \ref{cosinesinecircle} from Section \ref{TheUnitCircle} with $$x = -3$$ and $$y=4$$, we find $$r = \sqrt{x^2+y^2} = 5$$. Hence, $$\cos(\theta) = -\frac{3}{5}$$ and $$\sin(\theta) = \frac{4}{5}$$. Applying Theorem \ref{doubleangle}, we get $$\cos(2\theta) = \cos^{2}(\theta) - \sin^{2}(\theta) = \left(-\frac{3}{5}\right)^2 - \left(\frac{4}{5}\right)^2 = -\frac{7}{25}$$, and $$\sin(2\theta) = 2 \sin(\theta) \cos(\theta) = 2 \left(\frac{4}{5}\right)\left(-\frac{3}{5}\right) = -\frac{24}{25}$$. Since both cosine and sine of $$2\theta$$ are negative, the terminal side of $$2\theta$$, when plotted in standard position, lies in Quadrant III.
2. If your first reaction to $\sin(\theta) = x$' is No it's not, $$\cos(\theta) = x!' then you have indeed learned something, and we take comfort in that. However, context is everything. Here, x' is just a variable - it does not necessarily represent the \(x$$-coordinate of the point on The Unit Circle which lies on the terminal side of $$\theta$$, assuming $$\theta$$ is drawn in standard position. Here, $$x$$ represents the quantity $$\sin(\theta)$$, and what we wish to know is how to express $$\sin(2\theta)$$ in terms of $$x$$. We will see more of this kind of thing in Section \ref{ArcTrig}, and, as usual, this is something we need for Calculus. Since $$\sin(2\theta) = 2 \sin(\theta) \cos(\theta)$$, we need to write $$\cos(\theta)$$ in terms of $$x$$ to finish the problem. We substitute $$x = \sin(\theta)$$ into the Pythagorean Identity, $$\cos^{2}(\theta) + \sin^{2}(\theta) = 1$$, to get $$\cos^{2}(\theta) + x^2 = 1$$, or $$\cos(\theta) = \pm \sqrt{1-x^2}$$. Since $$-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$$, $$\cos(\theta) \geq 0$$, and thus $$\cos(\theta) = \sqrt{1-x^2}$$. Our final answer is $$\sin(2\theta) = 2 \sin(\theta) \cos(\theta) = 2x\sqrt{1-x^2}$$.
We start with the right hand side of the identity and note that $$1 + \tan^{2}(\theta) = \sec^{2}(\theta)$$. From this point, we use the Reciprocal and Quotient Identities to rewrite $$\tan(\theta)$$ and $$\sec(\theta)$$ in terms of $$\cos(\theta)$$ and $$\sin(\theta)$$:
$\begin{array}{rcl} \dfrac{2\tan(\theta)}{1 + \tan^{2}(\theta)} & = & \dfrac{2\tan(\theta)}{\sec^{2}(\theta)}= \dfrac{2 \left( \dfrac{\sin(\theta)}{\cos(\theta)}\right)}{\dfrac{1}{\cos^{2}(\theta)}}= 2\left( \dfrac{\sin(\theta)}{\cos(\theta)}\right) \cos^{2}(\theta) \\ & = & 2\left( \dfrac{\sin(\theta)}{\cancel{\cos(\theta)}}\right) \cancel{\cos(\theta)} \cos(\theta) = 2\sin(\theta) \cos(\theta) = \sin(2\theta) \\ \end{array}$
1. In Theorem \ref{doubleangle}, one of the formulas for $$\cos(2\theta)$$, namely $$\cos(2\theta) = 2\cos^{2}(\theta) - 1$$, expresses $$\cos(2\theta)$$ as a polynomial in terms of $$\cos(\theta)$$. We are now asked to find such an identity for $$\cos(3\theta)$$. Using the sum formula for cosine, we begin with
$\begin{array}{rcl} \cos(3\theta) & = & \cos(2\theta + \theta) \\ & = & \cos(2\theta)\cos(\theta) - \sin(2\theta)\sin(\theta) \\ \end{array}$
Our ultimate goal is to express the right hand side in terms of $$\cos(\theta)$$ only. We substitute $$\cos(2\theta) = 2\cos^{2}(\theta) -1$$ and $$\sin(2\theta) = 2\sin(\theta)\cos(\theta)$$ which yields
$\begin{array}{rcl} \cos(3\theta) & = & \cos(2\theta)\cos(\theta) - \sin(2\theta)\sin(\theta) \\ & = & \left(2\cos^{2}(\theta) - 1\right) \cos(\theta) - \left(2 \sin(\theta) \cos(\theta) \right)\sin(\theta) \\ & = & 2\cos^{3}(\theta)- \cos(\theta) - 2 \sin^2(\theta) \cos(\theta) \\ \end{array}$
Finally, we exchange $$\sin^{2}(\theta)$$ for $$1 - \cos^{2}(\theta)$$ courtesy of the Pythagorean Identity, and get
$\begin{array}{rcl} \cos(3\theta) & = & 2\cos^{3}(\theta)- \cos(\theta) - 2 \sin^2(\theta) \cos(\theta) \\ & = & 2\cos^{3}(\theta)- \cos(\theta) - 2 \left(1 - \cos^{2}(\theta)\right) \cos(\theta) \\ & = & 2\cos^{3}(\theta)- \cos(\theta) - 2\cos(\theta) + 2\cos^{3}(\theta) \\ & = & 4\cos^{3}(\theta)- 3\cos(\theta) \\ \end{array}$
and we are done.
In the last problem in Example \ref{doubleangleex}, we saw how we could rewrite $$\cos(3\theta)$$ as sums of powers of $$\cos(\theta)$$. In Calculus, we have occasion to do the reverse; that is, reduce the power of cosine and sine. Solving the identity $$\cos(2\theta) = 2\cos^{2}(\theta) -1$$ for $$\cos^{2}(\theta)$$ and the identity $$\cos(2\theta) = 1 - 2\sin^{2}(\theta)$$ for $$\sin^{2}(\theta)$$ results in the aptly-named `Power Reduction' formulas below.
Power Reduction Formulas
For all angles $$\theta$$, \index{Power Reduction Formulas}
• $$\cos^{2}(\theta) = \dfrac{1 + \cos(2\theta)}{2}$$
• $$\sin^{2}(\theta) = \dfrac{1 - \cos(2\theta)}{2}$$
Example $$\PageIndex{1}$$:
Rewrite $$\sin^{2}(\theta) \cos^{2}(\theta)$$ as a sum and difference of cosines to the first power.
Solution
We begin with a straightforward application of Theorem \ref{powerreduction}
$\begin{array}{rcl} \sin^{2}(\theta) \cos^{2}(\theta) & = & \left( \dfrac{1 - \cos(2\theta)}{2} \right) \left( \dfrac{1 + \cos(2\theta)}{2} \right) \\ & = & \dfrac{1}{4}\left(1 - \cos^{2}(2\theta)\right) \\ & = & \dfrac{1}{4} - \dfrac{1}{4}\cos^{2}(2\theta) \\ \end{array}$
Next, we apply the power reduction formula to $$\cos^{2}(2\theta)$$ to finish the reduction
$\begin{array}{rcl} \sin^{2}(\theta) \cos^{2}(\theta) & = & \dfrac{1}{4} - \dfrac{1}{4}\cos^{2}(2\theta) \\ & = & \dfrac{1}{4} - \dfrac{1}{4} \left(\dfrac{1 + \cos(2(2\theta))}{2}\right) \\ & = & \dfrac{1}{4} - \dfrac{1}{8} - \dfrac{1}{8}\cos(4\theta) \\ & = & \dfrac{1}{8} - \dfrac{1}{8}\cos(4\theta) \\ \end{array}$
Another application of the Power Reduction Formulas is the Half Angle Formulas. To start, we apply the Power Reduction Formula to $$\cos^{2}\left(\frac{\theta}{2}\right) $\cos^{2}\left(\dfrac{\theta}{2}\right) = \dfrac{1 + \cos\left(2 \left(\frac{\theta}{2}\right)\right)}{2} = \dfrac{1 + \cos(\theta)}{2}.$ We can obtain a formula for \(\cos\left(\frac{\theta}{2}\right)$$ by extracting square roots. In a similar fashion, we may obtain a half angle formula for sine, and by using a quotient formula, obtain a half angle formula for tangent. We summarize these formulas below.
Half Angle Formulas
For all applicable angles $$\theta$$:
• $$\cos\left(\dfrac{\theta}{2}\right) = \pm \sqrt{\dfrac{1 + \cos(\theta)}{2}}$$
• $$\sin\left(\dfrac{\theta}{2}\right) = \pm \sqrt{\dfrac{1 - \cos(\theta)}{2}}$$
• $$\tan\left(\dfrac{\theta}{2}\right) = \pm \sqrt{\dfrac{1 - \cos(\theta)}{1+\cos(\theta)}}$$
where the choice of $$\pm$$ depends on the quadrant in which the terminal side of $$\dfrac{\theta}{2}$$ lies.
Example $$\PageIndex{1}$$:
1. Use a half angle formula to find the exact value of $$\cos\left(15^{\circ}\right)$$.
2. Suppose $$-\pi \leq \theta \leq 0$$ with $$\cos(\theta) = -\frac{3}{5}$$. Find $$\sin\left(\frac{\theta}{2}\right)$$.
3. Use the identity given in number \ref{doubleanglesinewtan} of Example \ref{doubleangleex} to derive the identity $\tan\left(\dfrac{\theta}{2}\right) = \dfrac{\sin(\theta)}{1+\cos(\theta)}$
Solution
1. To use the half angle formula, we note that $$15^{\circ} = \frac{30^{\circ}}{2}$$ and since $$15^{\circ}$$ is a Quadrant I angle, its cosine is positive. Thus we have
$\begin{array}{rcl} \cos\left(15^{\circ}\right) & = & + \sqrt{\dfrac{1+\cos\left(30^{\circ}\right)}{2}} = \sqrt{\dfrac{1+\frac{\sqrt{3}}{2}}{2}}\\ & = & \sqrt{\dfrac{1+\frac{\sqrt{3}}{2}}{2}\cdot \dfrac{2}{2}} = \sqrt{\dfrac{2+\sqrt{3}}{4}} = \dfrac{\sqrt{2+\sqrt{3}}}{2}\\ \end{array}$
Back in Example \ref{cosinesumdiffex}, we found $$\cos\left(15^{\circ}\right)$$ by using the difference formula for cosine. In that case, we determined $$\cos\left(15^{\circ}\right) = \frac{\sqrt{6}+ \sqrt{2}}{4}$$. The reader is encouraged to prove that these two expressions are equal.
1. If $$-\pi \leq \theta \leq 0$$, then $$-\frac{\pi}{2} \leq \frac{\theta}{2} \leq 0$$, which means $$\sin\left(\frac{\theta}{2}\right) < 0$$. Theorem \ref{halfangle} gives
$\begin{array}{rcl} \sin\left(\dfrac{\theta}{2} \right) & = & -\sqrt{\dfrac{1-\cos\left(\theta \right)}{2}} = -\sqrt{\dfrac{1- \left(-\frac{3}{5}\right)}{2}}\\ & = & -\sqrt{\dfrac{1 + \frac{3}{5}}{2} \cdot \dfrac{5}{5}} = -\sqrt{\dfrac{8}{10}} = -\dfrac{2\sqrt{5}}{5}\\ \end{array}$
1. Instead of our usual approach to verifying identities, namely starting with one side of the equation and trying to transform it into the other, we will start with the identity we proved in number \ref{doubleanglesinewtan} of Example \ref{doubleangleex} and manipulate it into the identity we are asked to prove. The identity we are asked to start with is $$\; \sin(2\theta) = \frac{2\tan(\theta)}{1 + \tan^{2}(\theta)}$$. If we are to use this to derive an identity for $$\tan\left(\frac{\theta}{2}\right)$$, it seems reasonable to proceed by replacing each occurrence of $$\theta$$ with $$\frac{\theta}{2} $\begin{array}{rcl} \sin\left(2 \left(\frac{\theta}{2}\right)\right) & = & \dfrac{2\tan\left(\frac{\theta}{2}\right)}{1 + \tan^{2}\left(\frac{\theta}{2}\right)} \\ \sin(\theta) & = & \dfrac{2\tan\left(\frac{\theta}{2}\right)}{1 + \tan^{2}\left(\frac{\theta}{2}\right)} \\ \end{array}$ We now have the \(\sin(\theta)$$ we need, but we somehow need to get a factor of $$1+\cos(\theta)$$ involved. To get cosines involved, recall that $$1 + \tan^{2}\left(\frac{\theta}{2}\right) = \sec^{2}\left(\frac{\theta}{2}\right)$$. We continue to manipulate our given identity by converting secants to cosines and using a power reduction formula
$\begin{array}{rcl} \sin(\theta) & = & \dfrac{2\tan\left(\frac{\theta}{2}\right)}{1 + \tan^{2}\left(\frac{\theta}{2}\right)} \\ \sin(\theta) & = & \dfrac{2\tan\left(\frac{\theta}{2}\right)}{\sec^{2}\left(\frac{\theta}{2}\right)} \\ \sin(\theta) & = & 2 \tan\left(\frac{\theta}{2}\right) \cos^{2}\left(\frac{\theta}{2}\right) \\ \sin(\theta) & = & 2 \tan\left(\frac{\theta}{2}\right) \left(\dfrac{1 + \cos\left(2 \left(\frac{\theta}{2}\right)\right)}{2}\right) \\ \sin(\theta) & = & \tan\left(\frac{\theta}{2}\right) \left(1+\cos(\theta) \right) \\ \tan\left(\dfrac{\theta}{2}\right) & = & \dfrac{\sin(\theta)}{1+\cos(\theta)} \\ \end{array}$
Our next batch of identities, the Product to Sum Formulas,\footnote{These are also known as the Prosthaphaeresis Formulas and have a rich history. The authors recommend that you conduct some research on them as your schedule allows.} are easily verified by expanding each of the right hand sides in accordance with Theorem \ref{circularsumdifference} and as you should expect by now we leave the details as exercises. They are of particular use in Calculus, and we list them here for reference.
Note: Product to Sum Formulas
For all angles $$\alpha$$ and $$\beta$$, \index{Product to Sum Formulas}
• $$\cos(\alpha)\cos(\beta) = \frac{1}{2} \left[ \cos(\alpha - \beta) + \cos(\alpha + \beta)\right]$$
• $$\sin(\alpha)\sin(\beta) = \frac{1}{2} \left[ \cos(\alpha - \beta) - \cos(\alpha + \beta)\right]$$
• $$\sin(\alpha)\cos(\beta) = \frac{1}{2} \left[ \sin(\alpha - \beta) + \sin(\alpha + \beta)\right]$$
Related to the Product to Sum Formulas are the Sum to Product Formulas, which we will have need of in Section \ref{TrigEquIneq}. These are easily verified using the Product to Sum Formulas, and as such, their proofs are left as exercises.
Note: Sum to Product Formulas:
For all angles $$\alpha$$ and $$\beta$$:
1. $$\cos(\alpha) + \cos(\beta) = 2 \cos\left( \dfrac{\alpha + \beta}{2}\right)\cos\left( \dfrac{\alpha - \beta}{2}\right)$$
2. $$\cos(\alpha) - \cos(\beta) = - 2 \sin\left( \dfrac{\alpha + \beta}{2}\right)\sin\left( \dfrac{\alpha - \beta}{2}\right)$$
3. $$\sin(\alpha) \pm \sin(\beta) = 2 \sin\left( \dfrac{\alpha \pm \beta}{2}\right)\cos\left( \dfrac{\alpha \mp \beta}{2}\right)$$
Example $$\PageIndex{1}$$:
1. Write $$\; \cos(2\theta)\cos(6\theta) \;$$ as a sum.
2. \Write $$\; \sin(\theta) - \sin(3\theta) \;$$ as a product.
Solution
1. Identifying $$\alpha = 2\theta$$ and $$\beta = 6\theta$$, we find
$\begin{array}{rcl} \cos(2\theta)\cos(6\theta) & = & \frac{1}{2} \left[ \cos(2\theta - 6\theta) + \cos(2\theta + 6\theta)\right]\\ & = & \frac{1}{2} \cos(-4\theta) + \frac{1}{2}\cos(8\theta) \\ & = & \frac{1}{2} \cos(4\theta) + \frac{1}{2} \cos(8\theta), \end{array}$
where the last equality is courtesy of the even identity for cosine, $$\cos(-4\theta) = \cos(4\theta)$$.
1. Identifying $$\alpha = \theta$$ and $$\beta = 3\theta$$ yields
$\begin{array}{rcl} \sin(\theta) - \sin(3\theta) & = & 2 \sin\left( \dfrac{\theta - 3\theta}{2}\right)\cos\left( \dfrac{\theta + 3\theta}{2}\right) \\ & = & 2 \sin\left( -\theta \right)\cos\left( 2\theta \right) \\ & = & -2 \sin\left( \theta \right)\cos\left( 2\theta \right), \\ \end{array}$
where the last equality is courtesy of the odd identity for sine, $$\sin(-\theta) = -\sin(\theta)$$.
The reader is reminded that all of the identities presented in this section which regard the circular functions as functions of angles (in radian measure) apply equally well to the circular (trigonometric) functions regarded as functions of real numbers. In Exercises \ref{idengraphfirst} - \ref{idengraphlast} in Section \ref{TrigGraphs}, we see how some of these identities manifest themselves geometrically as we study the graphs of the these functions. In the upcoming Exercises, however, you need to do all of your work analytically without graphs.
### Contributors
• Carl Stitz, Ph.D. (Lakeland Community College) and Jeff Zeager, Ph.D. (Lorain County Community College)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9800750613212585, "perplexity": 244.89768183798046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591216.51/warc/CC-MAIN-20180719183926-20180719203926-00115.warc.gz"}
|
http://codereview.stackexchange.com/questions/20896/readability-concerns-with-literal-dictionary-lookup-vs-if-else
|
# Readability concerns with literal dictionary lookup vs if-else
messages_dict = {'error':errors, 'warning':warnings}[severity]
messages_dict[field_key] = message
and I'm to use this instead:
if severity == 'error':
messages_dict = errors
elif sevierty == 'warning':
messages_dict = warnings
else:
raise ValueError('Incorrect severity value')
messages_dict[field_key] = message
But it looks too verbose for such a simple thing.
I don't care too much if it's more efficient than constructing a dictionary for just two mappings. Readability and maintainability is my biggest concern here (errors and warnings are method arguments, so I cannot build the lookup dictionary beforehand and reuse it later).
-
I don't see the readability concern with the first case. – Lattyware Jan 25 '13 at 14:43
I thought that it might puzzle an unaware reader... – fortran Jan 25 '13 at 14:52
Unaware of what? Dictionaries? They are a core Python data structure - if someone can't read that, they don't know Python. You can't write code that is readable for someone who have no idea. – Lattyware Jan 25 '13 at 15:08
I meant unaware of the 'pattern' (using a dict to choose between two options). I've seen complaints in peer reviews about using boolean operators in a similar way (like x = something or default, maybe a little bit more complex) because it was obscure :-/ – fortran Jan 25 '13 at 17:21
That's a different situation, as it's relatively unclear what is going on (and is better replaced by the ternary operator). This isn't particularly a pattern, it's just a use of dictionaries. – Lattyware Jan 26 '13 at 14:14
Looking into the future can certainly be tricky, but you need to also think how much will the code need to be change to assert a specific new need. If you need to add a new level of reporting, would you prefer to add a new else if or just a value in a dictionary?
Readability then starts to backfire, since if ... else is quite common. What happens when you arrive at 3, 4 or 5 branches?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3376534879207611, "perplexity": 1905.0553926655102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430451423003.11/warc/CC-MAIN-20150501033703-00028-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/500eb794e4b0ed432e10f5e7
|
## Got Homework?
### Connect with other students for help. It's a free community.
• across
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
55 members online
• 0 viewing
## rebeccaskell94 Group Title In △RST, what is the length of line segment RT? How old I'll draw it. Possible answers: 8 16 radical 2 32 16radical 3 2 years ago 2 years ago Edit Question Delete Cancel Submit
• This Question is Closed
Best Response
You've already chosen the best response.
1
|dw:1343141784834:dw|
• 2 years ago
2. ParthKohli
Best Response
You've already chosen the best response.
2
All right. You do observe that this is a 45-45-90 which has a leg 16 right?
• 2 years ago
Best Response
You've already chosen the best response.
1
yes.
• 2 years ago
4. ParthKohli
Best Response
You've already chosen the best response.
2
Okay. Another thing that I should tell you—if two angles are equal, then two sides are equal too! Do you know what a triangle with two equal sides is known as?
• 2 years ago
Best Response
You've already chosen the best response.
1
Right triangle?
• 2 years ago
6. ParthKohli
Best Response
You've already chosen the best response.
2
Hmm. No...
• 2 years ago
7. ParthKohli
Best Response
You've already chosen the best response.
2
There are names given to the triangles based on their *sides*. Equilateral triangle - all sides equal. Isosceles triangle - two sides equal. Scalene triangle - no sides equal.
• 2 years ago
Best Response
You've already chosen the best response.
1
oh lol xD okay so it's isosceles sorry D:
• 2 years ago
9. ParthKohli
Best Response
You've already chosen the best response.
2
All right, so an isosceles triangle has 2 angles and 2 sides equal. So, we have TWO angles equal. This must be an isosceles triangle(which makes the two legs equal by default).
• 2 years ago
Best Response
You've already chosen the best response.
1
So ST is 16 as well?
• 2 years ago
11. ParthKohli
Best Response
You've already chosen the best response.
2
|dw:1343142127177:dw|
• 2 years ago
12. ParthKohli
Best Response
You've already chosen the best response.
2
Yes! You got it! Now can you use the Pythagorean Theorem? :)
• 2 years ago
Best Response
You've already chosen the best response.
1
16^2 + 16^2 = c^2? C in this case being RT?
• 2 years ago
14. ParthKohli
Best Response
You've already chosen the best response.
2
Yep!
• 2 years ago
Best Response
You've already chosen the best response.
1
256 or √16?
• 2 years ago
Best Response
You've already chosen the best response.
1
I don't think I did that right...
• 2 years ago
17. ParthKohli
Best Response
You've already chosen the best response.
2
Umm...no.....
• 2 years ago
18. ParthKohli
Best Response
You've already chosen the best response.
2
What is $$16^2$$?
• 2 years ago
Best Response
You've already chosen the best response.
1
oh duhhrr give me a sec
• 2 years ago
Best Response
You've already chosen the best response.
1
512=c^2 or c=22.6?
• 2 years ago
21. ParthKohli
Best Response
You've already chosen the best response.
2
• 2 years ago
Best Response
You've already chosen the best response.
1
-___- √512 ? maybe?
• 2 years ago
23. ParthKohli
Best Response
You've already chosen the best response.
2
Yep, but you have to simplify that radical.
• 2 years ago
Best Response
You've already chosen the best response.
1
Or! 16√2 ?
• 2 years ago
25. ParthKohli
Best Response
You've already chosen the best response.
2
Yay! As you are done with the long method now, I'd give you a short-cut to do such questions involving a 45-45-90 triangle!
• 2 years ago
Best Response
You've already chosen the best response.
1
Okay :D
• 2 years ago
27. ParthKohli
Best Response
You've already chosen the best response.
2
If you are given a leg of a 45-45-90 triangle, you just put $$\sqrt2$$ in front of it to get the hypotenuse ^_^ • If the leg is $$8$$, then the hypotenuse is $$8\sqrt2$$ • If the leg is $$1281283283283249483483$$, then the hypotenuse is $$1281283283283249483483 \sqrt2$$
• 2 years ago
Best Response
You've already chosen the best response.
1
D: That is genius
• 2 years ago
29. ParthKohli
Best Response
You've already chosen the best response.
2
And, for example, if the hypotenuse is $$\pi\sqrt2$$, then the leg is $$\pi$$. Remember, you just remove that $$\sqrt2$$ in the case of $$hypotenuse \Longrightarrow leg$$.
• 2 years ago
Best Response
You've already chosen the best response.
1
*takes notes*
• 2 years ago
31. ParthKohli
Best Response
You've already chosen the best response.
2
But remember, this method is $$\textbf{only for the 45-45-90 triangles}$$.
• 2 years ago
Best Response
You've already chosen the best response.
1
Okay!
• 2 years ago
33. ParthKohli
Best Response
You've already chosen the best response.
2
Wanna know it for 30-60-90 too?
• 2 years ago
Best Response
You've already chosen the best response.
1
Yeah! But I have to go make lunch for my siblings :( I'll be back in like 10-15 minutes :D
• 2 years ago
35. ParthKohli
Best Response
You've already chosen the best response.
2
All right! :)
• 2 years ago
• Attachments:
## See more questions >>>
##### spraguer (Moderator) 5→ View Detailed Profile
23
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998399019241333, "perplexity": 9045.98269306549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900032.4/warc/CC-MAIN-20141030025820-00179-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://akjournals.com/search?pageSize=10&q=%22law+of+large+numbers%22&sort=relevance
|
# Search Results
## You are looking at 1 - 10 of 35 items for :
• "law of large numbers"
• All content
Clear All
# Strong laws of large numbers in von Neumann algebras
Acta Mathematica Hungarica
Author: Katarzyna Klimczak
References [1] Batty , C. J. K. 1979 The strong law of large numbers for states and traces of a W*-algebra Z. Wahrsch. Verw
Restricted access
# On the strong law of large numbers for ϕ-mixing and ρ-mixing random variables
Acta Mathematica Hungarica
Author: Anna Kuczmaszewska
. Probab. Lett. 79 105 – 111 10.1016/j.spl.2008.07.026 . [3] Fazekas , I. , Klesov , O. 2000 A general approach to the strong laws of large numbers Teor. Verojatnost. i Primenen. 45 569
Restricted access
# Fulfilment of the law of large numbers in case of variance determinations
Acta Geodaetica et Geophysica Hungarica
Authors: B. Hajagos and Ferenc Steiner
As the variance (the square of the minimum L 2-norm, i.e., the square of the scatter) is one of the basic characteristics of the conventional statistics, it is of practical importance to know the errors of its determination for different parent distribution types. This statement is outstandingly valid for the geostatistics because the (h) variogram (called also as semi-variogram) is defined as the half variance of some quantity-difference (e.g. difference of ore concentrations) in function of the h dis- tance of the measuring points and this g (h)-curve plays a basic role in the classical geostatistics. If the scatter (s VAR) is chosen to characterize the determination uncertainties of the variance (denoted the latter by VAR), this can be easily calculate as the quotient A VAR= Ön (if the number n of the elements in the sample is large enough); for the so-called asymptotic scatter A VAR is known a simple formula (containing the fourth moment). The present paper shows that the AVAR has finite value unfortunately only for about a quarter of distribution types occurring in the earth sciences, it must be especially accentuate that A VAR has infinite value for that distribution type which most frequent occurs in the geostatistics. It is proven by the present paper that the law of large numbers is always fulfilled (i.e., the error always decreases if n increases) for the error-determinations if the semi-intersextile range is accepted (instead of the scatter); the single (quite natural) condition is the existence of the theoretical variance for the parent distribution. __
Restricted access
# On the strong laws of large numbers for double arrays of random variables in convex combination spaces
Acta Mathematica Hungarica
Authors: Nguyen Van Quang and Nguyen Tran Thuan
On the strong law of large numbers for pairwise independent random variables Acta Math. Hungar. 42 319 – 330 10.1007/BF01956779 . [3
Restricted access
# Strong laws of large numbers for random forests
Acta Mathematica Hungarica
Authors: A. Chuprunov and I. Fazekas
## Abstract
Random forests are studied. A moment inequality and a strong law of large numbers are obtained for the number of trees having a fixed number of nonroot vertices.
Restricted access
# Inequalities and strong laws of large numbers for random allocations
Acta Mathematica Hungarica
Authors: Alexey Chuprunov and István Fazekas
Moment inqualities and strong laws of large numbers are proved for random allocations of balls into boxes. Random broken lines and random step lines are constructed using partial sums of i.i.d. random variables that are modified by random allocations. Functional limit theorems for such random processes are obtained.
Restricted access
# Laws of large numbers for cooperative St. Petersburg gamblers
Periodica Mathematica Hungarica
Authors: Sándor Csörgő and Gordon Simons
Summary General linear combinations of independent winnings in generalized \St~Petersburg games are interpreted as individual gains that result from pooling strategies of different cooperative players. A weak law of large numbers is proved for all such combinations, along with some almost sure results for the smallest and largest accumulation points, and a considerable body of earlier literature is fitted into this cooperative framework. Corresponding weak laws are also established, both conditionally and unconditionally, for random pooling strategies.
Restricted access
# On the strong law of large numbers and additive functions
Periodica Mathematica Hungarica
Authors: István Berkes, Wolfgang Müller, and Michel Weber
## Abstract
Let f(n) be a strongly additive complex-valued arithmetic function. Under mild conditions on f, we prove the following weighted strong law of large numbers: if X,X 1,X 2, … is any sequence of integrable i.i.d. random variables, then
\documentclass{aastex} \usepackage{amsbsy} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{bm} \usepackage{mathrsfs} \usepackage{pifont} \usepackage{stmaryrd} \usepackage{textcomp} \usepackage{upgreek} \usepackage{portland,xspace} \usepackage{amsmath,amsxtra} \pagestyle{empty} \DeclareMathSizes{10}{9}{7}{6} \begin{document} $$\mathop {\lim }\limits_{N \to \infty } \frac{{\sum\nolimits_{n = 1}^N {f(n)X_n } }} {{\sum\nolimits_{n = 1}^N {f(n)} }} = \mathbb{E}Xa.s.$$ \end{document}
Restricted access
# Weak laws of large numbers for cooperative gamblers
Periodica Mathematica Hungarica
Authors: Sándor Csörgő and Gordon Simons
## Abstract
Based on a stochastic extension of Karamata’s theory of slowly varying functions, necessary and sufficient conditions are established for weak laws of large numbers for arbitrary linear combinations of independent and identically distributed nonnegative random variables. The class of applicable distributions, herein described, extends beyond that for sample means, but even for sample means our theory offers new results concerning the characterization of explicit norming sequences. The general form of the latter characterization for linear combinations also yields a surprising new result in the theory of slow variation.
Restricted access
# One-sided strong laws forincrements of sumsof i.i.d. random variables
Studia Scientiarum Mathematicarum Hungarica
Author: A. N. Frolov
ERDŐS, P. and RÉNYI, A., On a new law of large numbers, J. Analyse Math. 23 (1970), 103-111. MR 42 # 6907 On a new law of large numbers J. Analyse Math
Restricted access
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7612602710723877, "perplexity": 1763.5785948506684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153931.11/warc/CC-MAIN-20210730025356-20210730055356-00216.warc.gz"}
|
http://openstudy.com/updates/4e2d925e0b8b3d38d3bab954
|
## Got Homework?
### Connect with other students for help. It's a free community.
• across
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
55 members online
• 0 viewing
## gfoste00 Group Title HELP. Assume g^-1(10) is a one to one function. If g(x) = x^2+10x with x greater than or equal to -5, find g^-1 (10). 3 years ago 3 years ago Edit Question Delete Cancel Submit
• This Question is Closed
1. abtrehearn Group Title
Best Response
You've already chosen the best response.
0
If we solve the equation$x = y^{2} + 10y$for y, we get the inverse of g. Completing the square:$y^{2} + 10y + 25 = x + 25$$y + 5 = \sqrt{x + 25}$$y = g^{-1}(x) = -5 + \sqrt{x + 25}$$g^{-1}(10) = -5 + \sqrt{35}\approx.916$
• 3 years ago
• Attachments:
## See more questions >>>
##### spraguer (Moderator) 5→ View Detailed Profile
23
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9808996319770813, "perplexity": 13733.617827405262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510266597.23/warc/CC-MAIN-20140728011746-00027-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/208743/how-to-redefine-that-is-backslash-space
|
# How to redefine “\ ” (that is “<backslash> <space>”)?
In (La)TeX, is there a way to redefine the command \ (that is "<backslash> <space>")?
I have lots of texts where \ is used as non-breakable space after "." characters like in Dr.\ or Mr.\ and I want to make that space in the output just a bit shorter than the normal space.
Please note, I do not want to change the texts, but rather redefine the \ macro in the preamble. Is this possible?
• The \ macro inserts ordinary (breakable) space, not non-breakable space. For non-breakable space, you should use (and suitably modify, as necessary) the ~ symbol. – Mico Oct 24 '14 at 5:59
I would strongly recommend against this, but it can be done. The command \ is a primitive meaning 'a normal space' so shows up in various places, in particular the definition of \nonbreakspace. Thus a 'safe' redefinition of \ must at least deal with that:
\documentclass{article}
\let\hardspace\ %
\DeclareRobustCommand*\nobreakspace{\leavevmode\nobreak\hardspace}
%\let\ ~
\begin{document}
Some text to show that this is now a non-breaking space in a demo:
Mr.\ Black.
\let\ ~
Some text to show that this is now a non-breaking space in a demo:
Mr.\ Black.
\end{document}
I've commented out \let\ ~ in the preamble in the above so that the demo shows the effect of the change, but in a real case you'd apply it to everything. As pointed out by others, you really should use the correct mark-up to differentiate between a 'forced' normal space and a non-breaking space.
• As the poster I have to explain what the reason for my question was: I write my papers in Markdown and convert them with pandoc via LaTeX to pdf or to html. Markdown has basckslash-space as non-breakable space and I though that the same is in TeX, but of course not: So pandoc converts the Markdown-backslash-space into TeX-tilde, so if anything I redefine tilde in LaTeX as a little narrower space. Done that and it works great. But your answer is of course perfectly valid for the backslash-space redefinition. – halloleo Nov 1 '14 at 1:24
For example, starting from the definition of ~:
\makeatletter
\def\hallospace{\penalty\@M \kern0.3em} % say, 0.3em
\let\oldspace=\ %
\let\ =\hallospace
\makeatother
(And remember to use ~ as an unbreakable space in the future).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9701371788978577, "perplexity": 2435.2919328762005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565376.63/warc/CC-MAIN-20210125061144-20210125091144-00255.warc.gz"}
|
https://mathoverflow.net/questions/34487/what-are-the-most-important-results-and-papers-in-complexity-theory-that-every/34638
|
# What are the most important results (and papers) in complexity theory that every one should know?
A few years ago Lance Fortnow listed his favorite theorems in complexity theory: (1965-1974) (1975-1984) (1985-1994) (1995-2004) But he restricted himself (check the third one) and his last post is now 6 years old. An updated and more comprehensive list can be helpful.
What are the most important results (and papers) in complexity theory that every one should know? What are your favorites?
-
The list of important results in complexity theory that every complexity theorist "should" know is enormous. I think a better question would be: "what are the most important results in complexity theory that every mathematician should know?" – Ryan Williams Aug 4 '10 at 14:14
How do you expect answers to this question to be different from what Lance did? Do you want also things not on his lists? Do you want extra 'votes' for things already mentioned by him? (I find his lists pretty comprehensive: his favorites ~ results complexity theorists should know) – Mitch Harris Aug 4 '10 at 14:22
@Ryan: That is also a nice question. (Maybe we should start a community wiki for it for that also?) But I was more thinking about having a list of things that a first year graduate student who is going to work in complexity theory should learn (or know). – Kaveh Aug 4 '10 at 16:33
@Mitch: Repeating the results from Lance is OK, but I would like to have other perspective and results not mentioned by him, i.e. a more comprehensive list. His lists does not have anything from last 6 years. – Kaveh Aug 4 '10 at 16:33
I think Lance's choices from the past are pretty comprehensive, although I might add a couple more from the lower bounds department which for some reason are not well-known:
John E. Hopcroft, Wolfgang J. Paul, Leslie G. Valiant: On Time Versus Space. J. ACM 24(2): 332-337 (1977)
Wolfgang J. Paul, Nicholas Pippenger, Endre Szemerédi, William T. Trotter: On Determinism versus Non-Determinism and Related Problems (Preliminary Version) FOCS 1983: 429-438
The first paper shows that $TIME[t] \subseteq SPACE[t/\log t]$ (so, $SPACE[t]$ is not contained in $TIME[o(t \log t)]$). This result has since been generalized (from Turing machines) to all the "modern" models of computation. (For references, look at citations on Google scholar.)
The second paper shows that for multitape Turing machines, $NTIME[n] \neq TIME[n]$. This is really the only generic separation of nondeterministic and deterministic time that we know. It is not known whether this result extends to more modern models of computation. Perhaps one reason why these results are not better known is that many seem to believe that their approaches are a dead end, more or less. (There's some mathematical evidence for that: the techniques do break down if you try to push them any further, but it's always possible these techniques could be combined with something new.)
As for the last 6 years... I'll have to think about my choices for the "best papers" since then. Expect an update to this answer later. I think the following work over the last six years should be among those that everyone should know about. That doesn't mean that I think they're "best", it just means I am trying to answer the original question. It's a very biased list.
• Irit Dinur's combinatorial proof of the PCP theorem
• Omer Reingold's logspace algorithm for st-connectivity
• Ketan Mulmuley's geometric complexity theory program
• Subhash Khot's Unique Games Conjecture and what it entails (this was initiated earlier than 6 years ago but it has become much more important in the last 6 years)
• Russell Impagliazzo and Valentine Kabanets' "Derandomizing polynomial identity testing means proving circuit lower bounds"
• Lance Fortnow et al.'s time-space lower bounds for SAT (this is excluding all work that I have personally done on this, you can decide for yourself if you should know about that)
I left out a bunch of very important things because the list is 6 items. Sorry.
-
My favourite results are (1) the existence of NP-complete problems (Cook), (2) the Baker-Gil-Solovay theorem that whether P=NP holds relativized to on oracle depends on the oracle, and (3) Fagin's characterization of NP in terms of second order logic.
I am not so much interested in the large number of proofs that show that a certain problem is NP-complete, but the fact that there is some problem that is NP-complete is remarkable and important. And Cook's SAT is actually natural. (2) shows that several approaches will not work when one wants to settle P versus NP. (3) gives a much more natural definition of the class NP. Fagin's formulation (NP is the class of graph properties (of finite graphs) that can be expressed with a formula that has an n-ary second order existential quantifier in front, followed by a first order formula) indicates that NP vs co-NP is a very fundamental question as well (can second order existential quantification be replaced by second order universal quantification?).
-
The mere fact that NP-complete problems exist is (or should be) obvious and immediate once one has the insight to consider the concept in the first place: the problem "Given a nondeterministic machine P, and a number N in unary, determine if it is possible for P to halt in N steps" is clearly NP-complete. The fact that so many other naturally arising problems turn out to be NP-complete is what makes it interesting. – Sridhar Ramesh Aug 9 '10 at 20:37
I think you should add as a recent result the proof for QIP=IP=PSPACE
-
Is there a particular reason you chose this result? – András Salamon Aug 5 '10 at 17:49
Well, I have a bias here to be honest but I propose this as a result for the 2005-2010 period. First, to my knowledge, this is the best relation we have between classical and quantum classes. There are other good results on upper bounds for BQP, but this is the only result where a quantum complexity class is completely characterized. Second, although I don't know the complete details, the proof seems to be non-relativizing. And that's important because we can try to learn from here and use it to proof other non-relativizing results. Although, other people already tried that. – Marcos Villagra Aug 5 '10 at 23:29
Two corrections: first, there were several previous results that completely characterized a quantum complexity class in terms of a classical class (for example, QRG=EXP, NQP=coC_{=}P, PostBQP=PP, and BQP_CTC=PSPACE). Second, while PSPACE in QIP is nonrelativizing, the "new" direction (QIP in PSPACE) is relativizing. – Scott Aaronson Aug 6 '10 at 1:31
Thanks for the info. But what I wanted to point out is that for the "lower" classical complexity classes (PSPACE and below) this is the best, is that correct? Although the NQP=coC_{=}P result seems to be at a really low level. – Marcos Villagra Aug 6 '10 at 2:22
Also BQPSPACE=PSPACE. – Robin Kothari Aug 7 '10 at 14:27
There's the Bazzi/Razborov/Braverman sequence on fooling AC0 circuits.
-
Well I guess after Cook, Karp's paper "Reducibility among combinatorial problems" is the second most obligatory and canonical thing to mention. This paper was the first to demonstrate to the world the diversity and ubiquity of NP-complete problems.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7496868968009949, "perplexity": 854.3496767091694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936460576.24/warc/CC-MAIN-20150226074100-00144-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://astrobites.org/2011/03/28/supermassive-black-holes-and-galaxy-mergers/
|
# Supermassive Black Holes and Galaxy Mergers
• Paper title: Recoiling Black Holes in Merging Galaxies: Relationship to AGN Lifetimes, Starbursts, and the $M_{BH}-sigma_*$ Relation (see also here)
• Authors: Laura Blecha, Thomas J. Cox, Abraham Loeb, Lars Hernquist
• First author’s affiliation: Harvard-Smithsonian Center for Astrophysics
Many galaxies are believed to hold super-massive black holes (SMBHs) at their centers–for example, Sagittarius A* is believed to be an SMBH in the center of the Milky Way. When two SMBH-containing galaxies merge, the two SMBH’s are thought to slowly spiral into the center of the new merged galaxy as a result of dynamical friction with gas as well as gravitational interactions with nearby stars. Finally, the two SMBHs may merge into one, emitting an intense, anisotropic burst of gravitational radiation, which we eventually hope to detect with to-be-built instruments like LISA.
Due to momentum conservation, the newly merged black hole is “kicked” in the opposite direction of the gravitational wave burst. This kick can be very large, giving the SMBH a new velocity that can be up to about 4000 km/s, which far exceeds the escape velocity of the galaxy itself. This “recoil velocity” depends sensitively on the mass of the two merging black holes, as well as their spin magnitudes and relative orientations. In the case that the recoiling black hole does not receive a kick which exceeds the gravitational well of the galaxy, then the recoiled black hole will, at least on some timescale, be displaced from the center of the galaxy.
## Tracking the Black Holes in Simulated Galaxy Mergers
In today’s astrobite, this fascinating phenomenon is studied using a large suite of hydrodynamic simulations (run using the SPH code GADGET— look for an upcoming astrobite on tips for how to install GADGET for your own use!). The authors explore a wide range of galaxy merger types, as well as a range of kick velocities. Please see the figure for an example of black hole trajectories in one of these models, for different kick velocities.
Black-hole trajectories in a gas-poor merger model. The different colors denote different ratios of the kick velocity to the escape velocity.
First of all, the authors find that the black hole trajectories depend sensitively on the gas content of the merging galaxies. When the galaxies are first merging, if they are gas-rich, then this gas can shock and pile up at a central dense cusp, forming a steep potential well that makes it more difficult for the kicked black hole to escape. On the other hand, gas-poor galaxies have their mass tied up in stars, which simply flow collisionlessly past eachother and do not form as strong a potential well at the center of the galaxy. Higher gas content also allows for greater dynamical friction which further inhibits the black hole trajectory.
The authors also determine two physical situations which could allow observers to find a spatially or kinematically displaced SMBH. The first is when the black hole is kicked at near the escape velocity; in this case, the black hole retains a small accretion disk around it; this would appear as a slowly fading AGN in x-ray bands. The next is in an intermediate-kick case where the black hole periodically enters the gas-rich galactic center, picking up new material to accrete. More generally, the authors find that adding the physics of recoil kick to the black hole merger changes the lifetime of the AGN, whether by shortening it by removing it from the rich galactic-center environment, or sometimes lengthening it by giving it new material to accrete if the galactic center is gas-poor.
Finally, the authors find that black hole kicks can change the evolution of their host galaxies. For example, recoil kicks introduce more scatter into the $M_{BH}-sigma_*$ relation. This is due to the fact that, as noted above, the gravity-wave kick depends sensitively on aspects of the merger which are not directly captured by the $M_{BH}-sigma_*$ relation (e.g. the relative orientation of the black hole spins). Additionally, a displaced black hole can allow cold, dense gas in the center to remain undisturbed by AGN activity, leading to significant increases in star formation in the central region. Quoting the paper, “The simulations with recoil and without BHs have about twice the central [galactic] density of the stationary-BH case, which corresponds to a 3% increase in total stellar mass.”
In summary, today’s astrobite authors find that general relativity’s prediction of large black hole merger kicks adds another rich chapter in the complex (and poorly-understood) story of how supermassive black holes and their host galaxies co-evolve.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6884915828704834, "perplexity": 1392.3752608285472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694176.67/warc/CC-MAIN-20200127020458-20200127050458-00530.warc.gz"}
|
https://www.physicsforums.com/threads/proof-that-a-vector-space-w-is-the-direct-sum-of-ker-l-and-im-l.582462/
|
Proof that a vector space W is the direct sum of Ker L and Im L
1. Feb 29, 2012
nilwill
Hi there. I'm a long time reader, first time poster. I'm an undergraduate in Math and Economics and I am having trouble in Linear Algebra. This is the first class I have had that focuses solely on proofs, so I am in new territory.
1. The problem statement, all variables and given/known data
note Although the question doesn't state it, I think P is supposed to be a projection.
Let W be a vector space. Let P:W→W be a linear map s.t. P2=P.
Show that W= KerP + ImP and KerP$\cap$ImP
namely, W is the direct sum of KerP and ImP
Hint: To show W is the sum, write an element of W in the form w=w-P(w)+P(w)
2. Relevant equations
I am unsure of any relevant equations.
3. The attempt at a solution
KerP={z|P(z)=0}
ImP={P(v)|v$\in$W}
I am kind of fuzzy on the meaning of P2=P, and this is where I am stuck.
Would an example be something like:
P=$\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)$
P2=$\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \times \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)$ =P?
From this example: for KerP=P(z)=0 , we would need to satisfy z+0=0, so z=0
For some v$\in$W, the ImP is P(v)=v.
KerP$\cap$ImP=0→ P(v)=P(z) iff v=0 so z=v where the two intersect.
I am unsure where to go from here, or if I'm even doing this correctly. If someone could nudge me along towards and answer without giving the me the full proof, it would be much appreciated. I've been stuck on this problem for 4 hours over two days and I can't seem to figure it out.
2. Feb 29, 2012
Deveno
P2 = P is sometimes used as the definition of a projection.
we can always write w = P(w) + (w - P(w))
no matter what w is.
P(w) is clearly in Im(P), so all you have to do is show that
w - P(w) is in ker(P).
what is P(w - P(w))?
3. Feb 29, 2012
nilwill
P:V→V and P2=P
Let me see if I can define this better: so if V is the direct sum of U + W
let P(v)=u
let Im P be a complete subset of U; if u is an element of U P(u)=u, so Im P =U
let Ker P be a complete subset of W; if v is an element of V and v is an element of W, the v=0+w for some element w in W, so Ker P =W
let v=P(v)+(v-P(v))
Since P(v) is in the Im P, then (v-P(v)) is in the Ker P
So P(v-P(v))=P(v)-P2(v)=P(v)-P(u)=u-u=0
therefore, v-P(v) is in the Ker P
Is this correct so far? Sorry for not using latex. I have only used it in creating this post and was stretched for time.
I think I can do the rest if I have set this up correctly.
4. Feb 29, 2012
Deveno
we're not worried about "direct sum" just yet, just the sum (or "join").
none of this makes much sense. and it's unnecessary.
you don't need to "let" v = P(v) + (v-P(v)) it ALWAYS is, by the rules of vector addition.
you don't KNOW this yet, you're trying to prove it. proofs start with what you know. in this case, what you know is:
v = P(v) + (v-P(v)) (simple algebra, always true, since P(v) + -(P(v)) is the 0-vector) and:
P(v) is in Im(P) (by the definition of what Im(P) is Im(P)= {u in W: u = P(v) for some v in W}.) and:
P2 = P (so P(P(v)) = P(v), you'll use this later).
why bring "u" into this?
in my last post, i asked you a question:
what is P(w - P(w))? you don't need any "extra letters" to answer this, you just need one certain fact about P.
Similar Discussions: Proof that a vector space W is the direct sum of Ker L and Im L
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206842184066772, "perplexity": 1096.1495384887871}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804666.54/warc/CC-MAIN-20171118055757-20171118075757-00560.warc.gz"}
|
https://core.ac.uk/display/2181680
|
Location of Repository
## Deformation of $\ell$-adic sheaves with Undeformed Local Monodromy
### Abstract
Let $X$ be a smooth connected algebraic curve over an algebraically closed field $k$. We study the deformation of $\ell$-adic Galois representations of the function field of $X$ while keeping the local Galois representations at all places undeformed.Comment: 21 pages. To appear in Journal of Number Theor
Topics: Mathematics - Algebraic Geometry, Mathematics - Number Theory, 14D15
Year: 2012
OAI identifier: oai:arXiv.org:1103.1093
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9007775783538818, "perplexity": 705.0475818277414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00125.warc.gz"}
|
https://www.gamedev.net/forums/topic/208804-how-would-you-convert-this-to-c-from-c/
|
Archived
This topic is now archived and is closed to further replies.
How would you convert this to C++ from C?
This topic is 5295 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
Recommended Posts
if ( SDL_Init(SDL_INIT_AUDIO|SDL_INIT_VIDEO) < 0 )
{
printf("Unable to init SDL: %s\n", SDL_GetError());
exit(1);
}
I was thinking it would be like
if ( SDL_Init(SDL_INIT_AUDIO|SDL_INIT_VIDEO) < 0 )
{
cout << "cannot initialize" << SDL_GetError()
}
but I get errors, please help.Thanks Learning C++ was hard. Trying to relearn C++ sucks. Hopefully I remember everything and learn it right this time. [edited by - Thrust on February 20, 2004 11:05:27 PM] [edited by - Thrust on February 20, 2004 11:08:54 PM]
Share on other sites
What are the errors?
Share on other sites
Check the semicolon on the cout line
Share on other sites
You will probable need this:
#include <iostream>using namespace std;
[edited by - PlayGGY on February 20, 2004 11:15:31 PM]
Share on other sites
And how exactly does this relate to graphics programming ?
Share on other sites
1) This is not a graphics question. This is General Programming. If you had a question specifically about SDL, that would be the OpenGL forum, and if your question was about graphics theory or non-API-specific rendering, that''s for Graphics.
2) printf() is superior to cout. printf can print formatted data, like "%.2f" which prints a number to two decimal places, like "1066.34", or "0x%X" which prints a hex number like "0x1F". The only reason to use cout is that it involves less typing if you don''t care about formatting.
3) You forgot a semicolon at the end of the line that calls cout. Should be:
cout << "Unable to init SDL: " << SDL_GetError() << endl;
4) You forgot to specify "using namespace std" as someone already pointed out. I always forget that too. What makes it more confusing is gcc uses that namespace by default, but VC++ does not.
~CGameProgrammer( );
Share on other sites
quote:
Original post by CGameProgrammer
1) This is not a graphics question. This is General Programming. If you had a question specifically about SDL, that would be the OpenGL forum, and if your question was about graphics theory or non-API-specific rendering, that''s for Graphics.
2) printf() is superior to cout. printf can print formatted data, like "%.2f" which prints a number to two decimal places, like "1066.34", or "0x%X" which prints a hex number like "0x1F". The only reason to use cout is that it involves less typing if you don''t care about formatting.
3) You forgot a semicolon at the end of the line that calls cout. Should be:
cout << "Unable to init SDL: " << SDL_GetError() << endl;
4) You forgot to specify "using namespace std" as someone already pointed out. I always forget that too. What makes it more confusing is gcc uses that namespace by default, but VC++ does not.
~CGameProgrammer( );
You can format with cout too... it is all in the std library.
Share on other sites
cout << "Decimal " << dec << 0xFF;
cout << "Octal " << oct << 0xFF;
cout << "Hex " << hex << 0xFF;
try swapping 0xFF with 255 in these instances as well
love cout, can''t beat simplicity
Share on other sites
quote:
Original post by Pigpen
cout << "Decimal " << dec << 0xFF;
cout << "Octal " << oct << 0xFF;
cout << "Hex " << hex << 0xFF;
try swapping 0xFF with 255 in these instances as well
love cout, can't beat simplicity
I don't get it. Are dec/oct/hex constants that are defined somewhere, like endl, that indicate the following value should be in that format? And I still don't see how it can give you all the formatting printf can (decimal places, mainly). I'm sure it can be done, but probably only by using std::string's functions.
~CGameProgrammer( );
[edited by - CGameProgrammer on February 21, 2004 1:58:24 AM]
Share on other sites
quote:
Original post by CGameProgrammer
I don''t get it. Are dec/oct/hex constants that are defined somewhere, like endl, that indicate the following value should be in that format?
They aren''t constants. They''re manipulators. For formatting numbers, in particular decimal places, try looking up the std::setprecision(), std::setw(), std::setfill(), and std::setiosflags() manipulators. This can all be done without touching std::string.
1. 1
2. 2
JoeJ
20
3. 3
frob
16
4. 4
5. 5
• 10
• 10
• 11
• 13
• 9
• Forum Statistics
• Total Topics
632195
• Total Posts
3004717
×
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24140334129333496, "perplexity": 7811.871670449424}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217951.76/warc/CC-MAIN-20180821034002-20180821054002-00339.warc.gz"}
|
http://www.gtagaming.com/gtagaming/news/archive.php?p=200614
|
News Archive
GTA: VCS - General News | @ 08:36 PM MDT | By JMR
According to many sources on the interweb, Take Two has filed three trademarks for the much-speculated sequel for Liberty City Stories, Rockstar's first journey into portable criminality.
The three trademarks registered are for "computer game programs", "clothing" (our guess is Vice City Stories-themed t-shirts and the like, such as the San Andreas bandanas), and "entertainment in the production of motion pictures". The last trademark led many to believe that Vice City Stories was actually a movie based on the Grand Theft Auto franchise, yet the last trademark mentions a global network, which is likely the reason why it was filed.
It seems as though the rumors concerning a sequel to LCS named "Vice City Stories" are true! We'll keep you updated.
April 2004 (16 Posts)Week: [ #4 ] May 2004 (66 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] June 2004 (78 Posts)Week: [ #1 | #2 | #3 | #4 ] July 2004 (80 Posts)Week: [ #1 | #2 | #3 | #4 ] August 2004 (117 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] September 2004 (66 Posts)Week: [ #1 | #2 | #3 | #4 ] October 2004 (79 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] November 2004 (19 Posts)Week: [ #1 | #2 | #3 | #4 ] December 2004 (13 Posts)Week: [ #1 | #2 | #3 | #4 ] January 2005 (16 Posts)Week: [ #-51 | #-50 | #-49 | #-48 | #-47 ] February 2005 (8 Posts)Week: [ #2 | #3 | #4 ] March 2005 (18 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] April 2005 (8 Posts)Week: [ #3 | #4 | #5 ] May 2005 (16 Posts)Week: [ #2 | #3 | #4 | #5 ] June 2005 (28 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] July 2005 (12 Posts)Week: [ #2 | #3 | #4 | #5 ] August 2005 (18 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] September 2005 (23 Posts)Week: [ #2 | #3 | #4 | #5 ] October 2005 (19 Posts)Week: [ #2 | #3 | #4 | #5 | #6 ] November 2005 (6 Posts)Week: [ #2 | #4 | #5 ] December 2005 (8 Posts)Week: [ #2 | #4 | #5 ] January 2006 (6 Posts)Week: [ #-50 | #-49 | #-48 | #-47 ] February 2006 (2 Posts)Week: [ #4 | #5 ] March 2006 (3 Posts)Week: [ #2 | #4 | #5 ] April 2006 (3 Posts)Week: [ #2 | #3 | #4 ] May 2006 (9 Posts)Week: [ #1 | #2 | #4 | #5 ] June 2006 (1 Posts)Week: [ #4 ] July 2006 (2 Posts)Week: [ #2 | #5 ] August 2006 (2 Posts)Week: [ #1 | #5 ] September 2006 (6 Posts)Week: [ #2 | #3 | #4 | #5 ] October 2006 (2 Posts)Week: [ #6 ] November 2006 (1 Posts)Week: [ #3 ] December 2006 (2 Posts)Week: [ #5 | #6 ] February 2007 (4 Posts)Week: [ #1 | #4 ] March 2007 (14 Posts)Week: [ #2 | #3 | #4 ] April 2007 (7 Posts)Week: [ #1 | #2 | #3 | #4 ] May 2007 (7 Posts)Week: [ #1 | #3 | #4 ] June 2007 (26 Posts)Week: [ #2 | #3 | #4 ] July 2007 (31 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] August 2007 (6 Posts)Week: [ #1 | #3 | #4 ] September 2007 (3 Posts)Week: [ #1 | #3 | #4 ] October 2007 (15 Posts)Week: [ #0 | #1 | #2 | #3 | #4 ] November 2007 (5 Posts)Week: [ #1 | #3 | #4 ] December 2007 (39 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] January 2008 (20 Posts)Week: [ #1 | #2 | #3 | #4 ] February 2008 (38 Posts)Week: [ #1 | #2 | #3 | #4 ] March 2008 (88 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] April 2008 (105 Posts)Week: [ #1 | #2 | #3 | #4 ] May 2008 (25 Posts)Week: [ #1 | #2 | #3 | #4 ] June 2008 (22 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] July 2008 (11 Posts)Week: [ #1 | #2 | #3 | #4 ] August 2008 (18 Posts)Week: [ #1 | #2 | #3 | #4 ] September 2008 (16 Posts)Week: [ #1 | #2 | #3 | #4 ] October 2008 (15 Posts)Week: [ #1 | #2 | #3 | #4 ] November 2008 (34 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] December 2008 (20 Posts)Week: [ #1 | #2 | #3 | #4 ] January 2009 (24 Posts)Week: [ #1 | #2 | #3 | #4 ] February 2009 (28 Posts)Week: [ #1 | #2 | #3 | #4 ] March 2009 (16 Posts)Week: [ #1 | #2 | #3 | #4 ] April 2009 (18 Posts)Week: [ #0 | #1 | #2 | #3 | #4 ] May 2009 (5 Posts)Week: [ #1 | #2 | #3 | #4 ] June 2009 (9 Posts)Week: [ #0 | #2 | #3 ] July 2009 (12 Posts)Week: [ #0 | #1 | #2 | #3 | #4 ] August 2009 (29 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] September 2009 (25 Posts)Week: [ #1 | #2 | #3 | #4 ] October 2009 (45 Posts)Week: [ #1 | #2 | #3 | #4 ] November 2009 (9 Posts)Week: [ #1 | #2 | #3 ] December 2009 (7 Posts)Week: [ #1 | #2 | #3 | #4 ] January 2010 (8 Posts)Week: [ #-51 | #-50 | #-49 | #-48 ] February 2010 (9 Posts)Week: [ #1 | #2 | #3 ] March 2010 (11 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] April 2010 (17 Posts)Week: [ #2 | #3 | #4 | #5 ] May 2010 (8 Posts)Week: [ #2 | #3 | #5 | #6 ] June 2010 (7 Posts)Week: [ #2 | #3 | #5 ] July 2010 (1 Posts)Week: [ #2 ] August 2010 (5 Posts)Week: [ #2 | #3 | #5 ] September 2010 (5 Posts)Week: [ #1 | #2 | #4 ] October 2010 (7 Posts)Week: [ #2 | #3 | #5 ] November 2010 (5 Posts)Week: [ #1 | #2 | #3 ] December 2010 (6 Posts)Week: [ #3 | #4 | #5 ] January 2011 (8 Posts)Week: [ #-49 | #-48 | #-47 | #-46 ] February 2011 (3 Posts)Week: [ #2 | #3 ] March 2011 (8 Posts)Week: [ #1 | #2 | #4 | #5 ] April 2011 (14 Posts)Week: [ #2 | #3 | #4 | #5 ] May 2011 (3 Posts)Week: [ #3 | #4 | #5 ] June 2011 (6 Posts)Week: [ #2 | #4 | #5 ] July 2011 (9 Posts)Week: [ #2 | #3 | #4 | #5 ] August 2011 (7 Posts)Week: [ #3 | #4 | #5 ] September 2011 (7 Posts)Week: [ #2 | #3 | #4 | #5 ] October 2011 (12 Posts)Week: [ #2 | #3 | #4 | #5 ] November 2011 (9 Posts)Week: [ #1 | #2 | #3 | #4 ] December 2011 (12 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] January 2012 (19 Posts)Week: [ #-50 | #-49 | #-48 | #-47 | #-46 ] February 2012 (16 Posts)Week: [ #2 | #3 | #4 | #5 ] March 2012 (15 Posts)Week: [ #2 | #3 | #4 | #5 ] April 2012 (9 Posts)Week: [ #2 | #3 | #4 | #5 ] May 2012 (10 Posts)Week: [ #1 | #2 | #3 | #4 ] June 2012 (5 Posts)Week: [ #1 | #2 | #3 | #5 ] July 2012 (9 Posts)Week: [ #2 | #3 | #4 | #5 | #6 ] August 2012 (10 Posts)Week: [ #2 | #3 | #4 | #5 ] September 2012 (7 Posts)Week: [ #2 | #3 | #4 | #5 ] October 2012 (20 Posts)Week: [ #2 | #3 | #4 | #5 ] November 2012 (32 Posts)Week: [ #2 | #3 | #4 | #5 ] December 2012 (25 Posts)Week: [ #2 | #3 | #4 | #5 | #6 ] January 2013 (11 Posts)Week: [ #1 | #2 | #3 | #4 ] February 2013 (1 Posts)Week: [ #1 ] March 2013 (6 Posts)Week: [ #1 | #2 | #4 ] April 2013 (19 Posts)Week: [ #0 | #1 | #2 | #3 | #4 ] May 2013 (7 Posts)Week: [ #1 | #2 | #3 | #4 ] June 2013 (4 Posts)Week: [ #2 | #3 | #4 ] July 2013 (20 Posts)Week: [ #0 | #1 | #2 | #3 | #4 ] August 2013 (20 Posts)Week: [ #1 | #2 | #3 | #4 ] September 2013 (27 Posts)Week: [ #1 | #2 | #3 | #4 | #5 ] October 2013 (15 Posts)Week: [ #1 | #2 | #3 | #4 ] November 2013 (12 Posts)Week: [ #1 | #2 | #3 | #4 ] December 2013 (12 Posts)Week: [ #1 | #2 | #3 | #5 ] January 2014 (2 Posts)Week: [ #1 ] February 2014 (5 Posts)Week: [ #2 | #3 | #4 ] March 2014 (7 Posts)Week: [ #1 | #2 | #3 | #4 ] April 2014 (6 Posts)Week: [ #0 | #1 | #3 ] May 2014 (4 Posts)Week: [ #0 | #1 | #2 ] June 2014 (3 Posts)Week: [ #2 | #3 ] August 2014 (3 Posts)Week: [ #2 | #3 | #4 ] September 2014 (3 Posts)Week: [ #0 | #1 | #2 ] November 2014 (8 Posts)Week: [ #1 | #2 | #3 | #4 ] December 2014 (2 Posts)Week: [ #2 ] January 2015 (3 Posts)Week: [ #2 ] February 2015 (3 Posts)Week: [ #3 | #4 ] March 2015 (3 Posts)Week: [ #1 ] April 2015 (2 Posts)Week: [ #2 ] May 2015 (1 Posts)Week: [ #1 ] June 2015 (1 Posts)Week: [ #1 ] February 2016 (1 Posts)Week: [ #3 ]
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9799157977104187, "perplexity": 13654.130503174303}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00190-ip-10-164-35-72.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/189190-complex-differentiation.html
|
1. Complex differentiation
Hey, I have this problem from a tute i was given,
Its all in the picture,
For part a i just showed the derivatives exist and that they were continuous, then used laplace to show that they are harmonic du/dx - du/dy = 0,
so that work, then to find v i just found the harmonic conjugate and i got -3y^2x+x^3 + c = v
so u+iv = y^3 - 3x^2y +i(3y^2x+x^3 + c )
Is that the complete solution to the question? I feel like I am missing somthing, more of a proof or somthing.
Part b i showed in the picture, i think the first part seems ok, then showing for elsewhere i just let z0 be any point, and tried to find if there were any points besides zero where the limit exists but lim z0/delz is infinity, so is that sufficient to say that that its only differentiable at zero?
We did somthing different aswell, let it be z= u+iv, then f(z) = u^2 + ivu cause Re(z) = u, then let these u^2 and vu be two new variables and applied C-R using implicite differentiation and found somthing different, that it is differentiable when Re(z) = constant or Im(z) = 0
Am i on the right track at all?
2. Re: Complex differentiation
Originally Posted by Daniiel
Hey, I have this problem from a tute i was given,
Its all in the picture,
For part a i just showed the derivatives exist and that they were continuous, then used laplace to show that they are harmonic du/dx - du/dy = 0,
so that work, then to find v i just found the harmonic conjugate and i got -3y^2x+x^3 + c = v
so u+iv = y^3 - 3x^2y +i(3y^2x+x^3 + c ) Note the missing minus sign.
Is that the complete solution to the question? I feel like I am missing something, more of a proof or something.
If z = x+iy and f(z) = u+iv, then I think they want you to express f(z) as a function of z (rather than x and y). With the change of sign that I indicated, you have $f(z) = ix^3 - 3x^2y - 3ixy^2 + y^3.$ Can you see how to write that as a function of z? [Think: Binomial theorem.]
Originally Posted by Daniiel
Part b i showed in the picture, i think the first part seems ok, then showing for elsewhere i just let z0 be any point, and tried to find if there were any points besides zero where the limit exists but lim z0/delz is infinity, so is that sufficient to say that that its only differentiable at zero?
We did somthing different as well, let it be z= u+iv, then f(z) = u^2 + ivu cause Re(z) = u, then let these u^2 and vu be two new variables and applied C-R using implicite differentiation and found somthing different, that it is differentiable when Re(z) = constant or Im(z) = 0
Am i on the right track at all?
The first part of b) is ok. For the second part, you are right to use the C–R equations, but you are getting the notation a bit mixed. If z = x+iy then $f(z) = x^2 + ixy.$ So you should take $u = x^2$ and $v = xy.$ Then apply the C–R equations.
3. Re: Complex differentiation
oh sweet thanks very much, so in terms of z for a, f(z)= iz^3 +ic right?
So for part b, because both C-R equations don't hold it is not differentiable elsewhere? only if x=y=0
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566742777824402, "perplexity": 829.5459795374439}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542009.32/warc/CC-MAIN-20161202170902-00328-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/density-of-an-iceberg.268414/
|
# Density of an iceberg
1. Oct 31, 2008
### Vuldoraq
1. The problem statement, all variables and given/known data
a) If the part of an iceberg above sea level is one ninth of the whole, what is the density of ice?
b)How much of the iceberg would show if it moved into a fresh water region?
2. Relevant equations
Density of sea water=1025kgm^-3
Density of fresh water 1000kgm^-3
Weight displaced=upthrust
Force due to gravity = mg
3. The attempt at a solution
For part a) I equated the upthrust and the force due to gravity, by applying Newtons first law. However I am confused as to whether I should also take into account the atmospheric pressure pressing the part of the iceberg above sea level down. In all the stuff I've read no one seems to take it into account when finding the density of an iceberg. Is there a reason for this?
In part b) I think you again apply Newtons first law and use the ice density calculated in part a),
$$\rho_{ice}*g*v_{ice}=\rho_{water}*g*v_{water}$$
$$\frac{\rho_{ice}}{\rho_{water}}=\frac{v_{water}}{v_{ice}}$$
Which gives the proportion of ice under the water.
2. Oct 31, 2008
Staff Emeritus
If you pick up a piece of paper, do you feel the 1400 pounds of force that air pressure exerts on it?
3. Oct 31, 2008
### Office_Shredder
Staff Emeritus
I do :(
4. Oct 31, 2008
### Vuldoraq
I geuss not, but isn't that because as soon as you pick it up air rushes underneath the paper, very quickly, and the air underneath exerts an equal and oppisite force to the air above, thus we don't feel the pressure force. In the sea this situation is clearly impossible.
Last edited: Oct 31, 2008
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.92151939868927, "perplexity": 780.2298467951233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719041.14/warc/CC-MAIN-20161020183839-00303-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://direct.mit.edu/neco/article-abstract/30/1/125/8332/Capturing-Spike-Variability-in-Noisy-Izhikevich?redirectedFrom=fulltext
|
To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8173683285713196, "perplexity": 951.6760025898499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00562.warc.gz"}
|
https://ajitjadhav.wordpress.com/tag/unprocrastination/
|
# I’ve been slacking, so bye for now, and see you later!
Recently, as I was putting finishing touches in my mind as to how to present the topic of the product states vs. the entangled states in QM, I came to realize that while my answer to that aspect has now come to a stage of being satisfactory [to me], there are any number of other issues on which I am not as immediately clear as I should be—or even used to be! That was frightening!! … Allow me to explain.
QM is hard. QM is challenging. And QM also is vast. Very vast.
In trying to write about my position paper on the foundations of QM, I have been focusing mostly on the axiomatic part of it. In offering illustrative examples, I found, that I have been taking only the simplest possible examples. However, precisely in this process, I have also gone away, and then further away, from the more concrete physics of it. … Let me give you one example.
Why must the imaginary root of the unity i.e. the $i$ appear in the Schrodinger equation? … Recently, I painfully came to realize that I had no real good explanation ready in mind.
It just so happened that I was idly browsing through Eisberg and Resnick’s text “Quantum Physics (of Atoms, Molecules…).” In my random browsing, I happened to glance over section 5.3, p. 134, and was blown over by the argument to this question, presented in there. I must have browsed through this section, years ago, but by now, I had completely forgotten anything about it. … How could I be so dumb as to even forget the fact that here is a great argument about this issue? … Usually, I am able to recall at least the book and the section where an answer to a certain question is given. At least that’s what happens for any of the engineering courses I am teaching. I am easily able to rattle off, for any question posed from any angle, a couple (if not more) books that deal with that particular aspect best. For instance, in teaching FEM: the best treatment on how to generate interpolation polynomials? Heubner (and also Rajasekaran), and only then Zienkiwicz. In teaching CFD: the most concise flux-primary description? Murthy’s notes (at Purdue), and only then followed by Versteeg and Mallasekara. Etc.
… But QM is vast—a bit too vast for me to recall even that much about answers, let alone have also the answers ready in my mind.
Also, around the same time, I ran into these two online resources on UG QM:
1. The course notes at Reed (I suppose by Griffiths himself): [^] and [^]
2. The notes and solved problems here at “Physics pages” [^]. A very neat (and laudable) an effort!
It was the second resource, in particular, which now set me thinking. … Yes, I was aware of it, and might have referred to it earlier on my blog, too. But it was only now that this site set me into thinking…
As a result of that thinking, I’ve decided to do something similar.
I am going to start writing answers at least to questions (and not problems) given in the first 12 or 14 chapters of Eisberg and Resnick’s abovementioned text. I am going to do that before coming to systematically writing my new position paper.
And I am going to undertake this exercise in place of blogging. … It’s important that I do it.
Accordingly, I am ceasing blogging for now.
I am first going to take a rapid first cut at answering at least the (conceptual) questions if not also the (quantitative) problems from Eisberg and Resnick’s book. I would be noting down my answers in an off-line LaTeX document. Tentatively speaking, I have decided to try to get through at least the first 6 chapters of this book, before resuming blogging. In the second phase, it would be chapters 7 through 11 or so, and the rest, in the third phase.
Once I finish the first phase, I may begin sharing my answers here on this blog.
Believe me, this exercise is necessary for me to do.
There certainly are some drawbacks to this procedure. Heisenberg’s formulation (which, historically, occurred before Schrodinger’s) would not receive a good representation. However, that does not mean that I should not be “finishing” this (E&R’s) book either. May be I will have to do a similar exercise (of answering the more conceptual or theoretical questions or drawing notes from) a similar book but on Heisenberg’s approach, too; e.g., “Quantum Mechanics in Simple Matrix Form” by Thomas Jordan [^]. … For the time being, though, I am putting it off to some later time. (Just a hint: As it so happens, my new position is closer—if at all it is that—to the Schrodinger’s “picture” as compared to Heisenberg’s.)
In the meanwhile, if you feel like reading something interesting on QM, do visit the above-mentioned resources. Very highly recommended.
In the meanwhile, take care, and bye for now.
And, oh, just one more thing…
…Just to remind you. Yes, regardless of it all, as mentioned earlier on this blog, even though I won’t be blogging for a while (say a month or more, till I finish the first phase) I would remain completely open to disclosing and discussing my new ideas about QM to any interested PhD physicist, or even an interested and serious PhD student. … If you are one, just drop me a line and let’s see how and when—and assuredly not if—we can meet.
Which Song Do You Like?
Check out your city’s version of Pharrell Williams’ “Happy” song. Also check out a few other cities’. Which one do you like more? Think about it (though I won’t ask you the reasons for your choices!)
OK. Take care, and bye (really) for now…
# Explicit vs. implicit FDM: reference needed
The following is my latest post at iMechanica [^]:
“The context is the finite difference modeling (FDM) of the transient diffusion equation (the linear one: $\dfrac{\partial T}{\partial t} = \alpha \dfrac{\partial^2 T}{\partial x^2}$).
Two approaches are available for modeling the evolution of $T$ in time: (i) explicit and (ii) implicit (e.g., the Crank-Nicolson method).
It was obvious to me that the explicit approach has a local (or compact) support whereas the implicit approach has a global support.
However, with some simple Google searches (and browsing through some 10+ books I could lay my hands on), I could not find any prior paper/text to cite by way of a reference.
I feel sure that it must have appeared in some or the paper (or perhaps even in a text-book); it’s just that I can’t locate it.
So, here is a request: please suggest me a reference where this observation (about the local vs. global support of the solution) is noted explicitly. Thanks in advance.
Best,
–Ajit
[E&OE]”
Self-explanatory, right?
[E&OE]
/
# An introductory course on CFD—3
The Beamer slides for Lecture # 3 are uploaded; see here [^].
This lecture is not directly concerned with CFD, or for that matter, even with fluid dynamics proper. It instead concerns itself with some material that is pre-requisite to both. Namely, the topic of tensors. However, Indian universities don’t cover this topic very well during the earlier, pre-requisite courses—esp. at the UG level. So, I decided to throw in some formulae and a few points concerning vectors and tensors, that’s all.
The material for this lecture, thus, is totally “extra” (at least in a course on CFD). The treatment here is, therefore, very cursory. The idea was to give students at least some background into these topics, by way of a rapid review.
As usual, feel free to point out errors and offer criticism.
Further, this topic being challenging to present to a newcomer in a brief manner, this lecture is the one where I am confident in the least. (At least, it’s the first lecture of this kind.) So, any suggestions for a better presentation would be highly appreciated (though, given my experience of this blog, really speaking, I don’t expect any comments to come in, anyway.)
All the same, I am happy that so much of typing in of these equations is, finally, out of the way!
“Enjoy”!
* * * * * * * * * * * * * * *
A Song I Like:
I still have not yet reviewed and taken a decision as to the song I like, also for this time round.
So, this section will continue to remain suspended.
[E&OE]
/
# An introductory course on CFD—2
The Beamer slides for Lecture # 2 are uploaded, here [^].
As usual, feel free to point out errors and offer criticism.
[I did spot a few typos in the Lecture # 1 (see previous post). These are being corrected, and an improved version will be uploaded some time later.]
Update on 2015.07.16:
For the convenience in updating, I have created a separate page at my personal Web site; it holds all the links to the material for this CFD course.
* * * * * * * * * * * * * * *
A Song I Like:
I have not yet reviewed and taken a decision as to the song I like.
So, this section will remain suspended, for the time being.
[E&OE]
/
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5927366018295288, "perplexity": 1111.9248582599016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106754.4/warc/CC-MAIN-20170820131332-20170820151332-00464.warc.gz"}
|
http://math.stackexchange.com/questions/143575/construct-locally-lipschitz-map-from-a-bounded-one
|
# Construct locally lipschitz map from a bounded one
Let $X$ be a Banach space and $BC(X)$ the space of all bounded closed subsets in $X$. It can be shown that $(BC(X),d_H)$ is a complete metric space (see this page for a definition of $d_H$). If $f:X\to X$ bounded. Does this imply that $F:BC(X)\to BC(X)$ given by $F(A)=\overline{\{f(a):a\in A\}}$ is locally Lipschitz continuous?
And if the map $f: X\to X$ is locally Lipschitz countinuous, is it automaticaly bounded?
-
Does this hold for $f:\mathbb R\to\mathbb R$ given by $f(x)=\sqrt[3]{x}$, which has arbitrarily large derivative near $0$? – Alex Becker May 10 '12 at 17:28
Take $X=\mathbb R$ and $f$ any continuous function on $\mathbb R$. Consider $A$, $B$ given by single points $A= \{a\}$, $B=\{b\}$. Then $F(A) = \{f(a)\}$, $F(B) = \{f(b)\}$, and $d_H(F(A),F(B)) = |f(a) - f(b)|$ while $d_H(A,B) = |a-b|$. So if $F$ is locally Lipschitz, $f$ must also be locally Lipschitz; if $F$ is bounded, $f$ must also be bounded. Any $f$ that is bounded but not locally Lipschitz is a counterexample to your first question. Any $f$ that is locally Lipschitz but not bounded is a counterexample to your second question (assuming the "it" there refers to $F$).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971151947975159, "perplexity": 81.9943403395691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772757.23/warc/CC-MAIN-20141217075252-00090-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://aas.org/archives/BAAS/v37n3/dps2005/395.htm
|
37th DPS Meeting, 4-9 September 2005
Session 61 Planetary Rings
Poster, Thursday, September 8, 2005, 6:00-7:15pm, Music Lecture Room 5
## [61.10] Cassini RSS occultation observations of density waves in Saturn's rings
C. A. McGhee, R. G. French (Wellesley Coll.), E. A. Marouf (San Jose State Univ.), N. J. Rappaport (Jet Propulsion Laboratory), P.J. Schinder (Cornell Univ. & NASA GSFC), A. Anabtawi, S. Asmar, E. Barbinis, D. Fleischman, G. Goltz, D. Johnston, D. Rochblatt (Jet Propulsion Laboratory)
On May 3, 2005, the first of a series of eight nearly diametric occultations by Saturn's rings and atmosphere took place, observed by the Cassini Radio Science (RSS) team. Simultaneous high SNR measurements at the Deep Space Network (DSN) at S, X, and Ka bands (\lambda= 13, 3.6, and 0.9 cm) have provided a remarkably detailed look at the radial structure and particle scattering behavior of the rings. By virtue of the relatively large ring opening angle (B=-23.6\circ), the slant path optical depth of the rings was much lower than during the Voyager epoch (B=5.9\circ), making it possible to detect many density waves and other ring features in the Cassini RSS data that were lost in the noise in the Voyager RSS experiment. Ultimately, diffraction correction of the ring optical depth profiles will yield radial resolution as small as tens of meters for the highest SNR data. At Ka band, the Fresnel scale is only 1--1.5 km, and thus even without diffraction correction, the ring profiles show a stunning array of density waves. The A ring is replete with dozens of Pandora and Prometheus inner Lindblad resonance features, and the Janus 2:1 density wave in the B ring is revealed with exceptional clarity for the first time at radio wavelengths. Weaker waves are abundant as well, and multiple occultation chords sample a variety of wave phases. We estimate the surface mass density of the rings from linear density wave models of the weaker waves. For stronger waves, non-linear models are required, providing more accurate estimates of the wave dispersion relation, the ring surface mass density, and the angular momentum exchange between the rings and satellite. We thank the DSN staff for their superb support of these complex observations.
Bulletin of the American Astronomical Society, 37 #3
© 2004. The American Astronomical Soceity.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8022190928459167, "perplexity": 5773.552341390075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400382386.21/warc/CC-MAIN-20141119123302-00166-ip-10-235-23-156.ec2.internal.warc.gz"}
|
https://primenewswire.com/im5fit/1kwuqg.php?tag=0a5032-sum-of-interior-angles-of-a-hexagon
|
Property Condition Assessment Checklist, Short Funny Animal Video Clips, Cheese Executive Officer Whisps, Evangelos Katsioulis Age, Is Simple Mills Pizza Dough Keto, Ace Academy Lockdown, Naruto - Ninja Destiny Rom, Anl Phone Number, 14 Day Vegan Detox, How Much Snow Did Superior Wi Get, Maps Chords Piano, "/>
## sum of interior angles of a hexagon
So the interior angle sum formula is: Interior Angle Sum = (n - 2)x 180⁰; where n is the number of sides. If you count one exterior angle at each vertex, the sum of the measures of the exterior angles of a polygon is always 360°. You can do this. There is a number of ways to do this and different hexagons will prompt different problem solving strategies. All Rights Reserved. Using the numerical formula above, come up with the formula to calculate the sum of the interior angles of a polygon. 720 degrees. The formula for the sum of that polygon's interior angles is refreshingly simple. If you learn the formula, with the help of formula we can find sum of interior angles of any given polygon. The sum of the interior angle measures of a hexagon is Add the angle measures in each group. Hexagon is defined as the polygonal figure having six sides and six angles. Show Answer. Nonagon. The measure of each interior angle of an equiangular n-gon is. A hexagon is a polygon with six sides. From the above table, the sum of the interior angles of a hexagon is 720$$^\circ$$ Two of the interior angles of the above hexagon are right angles. Regular octagons. sum of total angles = n * 180 -360 = (n-2) * 180. “Now that you have some ideas about how to find the sum of interior angles of a hexagon, extend your strategy to a few other polygons. An interior angle is defined as the angle inside of a polygon made by two adjacent sides. If you count one exterior angle at each vertex, the sum of the measures of the exterior angles of a polygon is always 360°. Since the sum of the interior angles of a triangle is 180°, the sum of the interior angles for the hexagon is 6 × 180° = 1080°. Create your account, The sum of the interior angles of a regular n-gon obeys the formula, $$\color{blue} {S = 180^{\circ} (n-2)} The sum of the measures of the interior angles of a polygon with n sides is (n – 2)180.. Hexagon has 6, so we take 540+180=720. We need to find the sum of interior angle of given hexagon. The whole angle for the quadrilateral. A hexagon need not be regular, so its angles need not all be the same. Services, How to Measure the Angles of a Polygon & Find the Sum, Working Scholars® Bringing Tuition-Free College to the Community. A regularhexagon has: 1. In each case, the angle measures add up to 720, so the answer is that all of these can be the six interior angle measures of a hexagon. Sum Interior Angles$$ \red 3 $$sided polygon (triangle)$$ (\red 3-2) \cdot180 180^{\circ} \red 4 $$sided polygon (quadrilateral)$$ (\red 4-2) \cdot 180 360^{\circ} \red 6 $$sided polygon (hexagon)$$ (\red 6-2) \cdot 180 720^{\circ} $$Problem 1. If we set that equal to the angle formula, (n-2) * 180, we can cancel out the 180 to get: 4 = n-2, or n=6. 900 degrees. Thus, Sum of interior angles of an equilateral triangle = (n-2) x 180° = (3-2) x 180° = 180° Find the sum of the interior angles of a square. So you have n * 180 degrees, However n* 180 degrees has 360 degrees (the sum of the angles at the point chosen within the polygon). Sum of interior angles = (n-2) x 180°, here n = here n = total number of sides. Therefore, the substitutions should be (6-2) *180. Understanding Quadrilaterals. The measure of an exterior angle of a regular... Find the area of the polygon 9 in. Copyright © 2020 Multiply Media, LLC. In the figure if we join BD,BE and BF we get four triangles. The sum of the interior angles of a hexagon must equal 720 degrees. Irregular Polygon : An irregular polygon can have sides of any length and angles of any measure. What is the conflict of the short story sinigang by marby villaceran? A pentagon has 5 sides and when divided, contains 3 triangles. The lesser the number of sides, the lower the sum of the interior angle is. (Hint: factor out 180 first) Type in your response below and set it equal to the sum of the interior angles of a hexagon. A hexagon need not be regular, so its angles need not all be the same. n-sided polygon with n=6 -> hexagon. Heptagon ( or Septagon) 7 sides. The sum of the exterior ang... maths. HD: 720 can also be written as 4 * 180. Any hexagon has: Sum of Interior Angles of 720° 9 diagonals; More Images. 140 degrees. There are seven triangles... Because the sum of the angles of each triangle is 180 degrees... We get .$$. New questions in Mathematics. Join Now. Area = (1.5√3) × s2 , or approximately 2.5980762 × s2(where s=side length) 4. The blue lines above show just one way to divide the hexagon into triangles; there are others. It is easy to see that we can do this for any simple convex polygon. To find the sum of the interior angles of a hexagon, use the formula 180(n-2) where n is 6 so 180(4)=720 degrees. Does pumpkin pie need to be refrigerated? What are the disadvantages of primary group? 720° Interior angles of an n-sided figure sum to 180°(n - 2) For a hexagon, with 6 sides, the interior angles come to . For example: Let us find the missing angle $$x^\circ$$ in the following hexagon. A hexagon is a regular polygon with six equal sides. Why don't libraries smell like bookstores? The sum of the interior angles of an n-sided polygon is: (n - 2) * 180. Also, The sum of all the interior angles of hexagon is 720 degrees. The sum of all the interior angles in a hexagon is equal to 720 degrees. Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library. From the above table, the sum of the interior angles of a hexagon is 720$$^\circ$$ Two of the interior angles of the above hexagon are right angles. Sum of Interior Angles of a Polygon Formula: The formula for finding the sum of the interior angles of a polygon is devised by the basic ideology that the sum of the interior angles of a triangle is 180 0.The sum of the interior angles of a polygon is given by the product of two less than the number of sides of the polygon and the sum of the interior angles of a triangle. answer! Hexagon is defined as the polygonal figure having six sides and six angles. Learn and use the formula to calculate the sum of interior angles in different types of polygons. What is plot of the story Sinigang by Marby Villaceran? 120 degrees. Internal angles of a hexagon The sum of the interior angles of a hexagon equals 720°. So, the sum of the interior angles of a heptagon is 900 degrees. What is the sum of the measures of the interior angles of a hexagon? That means, as we increase the number of sides, we increase the sum of the interior angles. jmjlm1618 jmjlm1618 09/14/2020 Mathematics ... sum of the measures of the interior angles of a hexagon. They are BDC, EBD, FBE and ABF. As we know, by angle sum property of triangle, the sum of interior angles of a triangle is equal to 180 degrees. 29 views Login. Several videos ago I had a figure that looked something like this, I believe it was a pentagon or a hexagon. Exterior Angles of 60° 3. Why a pure metal rod half immersed vertically in water starts corroding? So, the sum of the interior angles of a hexagon is 720 degrees. The sum of the interior angles of a hexagon is 720⁰. This should be minused. Hence, we can say now, if a convex polygon has n sides, then the sum of its interior angle is given by the following formula: A hexagon is a polygon that consists of six straight line segments and six interior angles. The formula for the sum of the interior angles of an n-sided polygon is (n-2)*180. 8 sides. 1080 degrees. The interior angle sum of a polygon formula is equals minus two times one hundred and eighty, where … However, there are regular and irregular hexagons. Sum of the Interior Angles in a Hexagon. Hexagonal nuts and bolts are easy to grip with a wrench, which can be re-positioned every 60° if needed. Who is the longest reigning WWE Champion of all time? So we’re wanting to know the sum of all of the interior angles, which means you wanna add up all of the inside angles together. A hexagon is a regular polygon with six equal sides. Interior Angles of 120° 2. There is a huge hexagon on Saturn, it is wider than Earth. An interior angle is defined as the angle inside of a polygon made by two adjacent sides. Full HD has 1080, and Quad HD has 1440. Let n n equal the number of sides of whatever regular polygon you are studying. The sum of the interior angles of a polygon is directly proportional to the number of sides it has. Because the hexagon is regular, all of the interior angles will have the same measure. Who are the famous writers in region 9 Philippines? Create segments AC, AD, and AE by moving the sliders in red all the way to the right The sum of the interior angles of a polygon is directly proportional to the number of sides it has. A regular octagon is an octagon whose sides are equal in length, and whose interior angles are equal in measure. There are a couple ways you could figure this out. MEDIUM. All rights reserved. interior angle of hexagon formula: the sum of interior angles of a polygon is 2160 degree the number of sides of the polygon is: sum of all the interior angles of a polygon: sum of inner angles of a polygon: sum of all the angles of a polygon: formula of sum of interior angles of a polygon: finding angles in polygons: how to find angle of pentagon Answer. There is a formula for this. 8th. An interior angle is defined as the angle inside of a polygon made by two adjacent sides. Become a Study.com member to unlock this Same thing for an octagon, we take the 900 from before and add another 180, (or another triangle), getting us 1,080 degrees. What are the release dates for The Wonder Pets - 2006 Save the Ladybug? Look at the image opposite. To find the measure of one angle in the regular hexagon, divide that number … How old was queen elizabeth 2 when she became queen? A polygon is a two-dimensional (2D) closed shape with at least 3 straight sides. We have step-by-step solutions for your textbooks written by Bartleby experts! Angle Sum Property. What was the Standard and Poors 500 index on December 31 2007? Regular Hexagons: The properties of regular hexagons: All sides are the same length (congruent) and all interior angles are the same size (congruent). Sum of Interior Angles Formula. Using the numerical formula above, come up with the formula to calculate the sum of the interior angles of a polygon. Sum of Interior Angles of 720° 2. We can find an unknown interior angle of a polygon using the "Sum of Interior Angles Formula". The sum of the measures of the interior angles of a polygon with n sides is (n – 2)180.. Enter your answer in the box. Hexagon. So, the sum of the interior angles of a hexagon is 720 degrees. So, the sum of the interior angles in the simple convex pentagon is 5*180°-360°=900°-360° = 540°. Plus this whole angle, which is going to be c plus y. Click here to get an answer to your question ️ What is the sum of the measures of the interior angles of a hexagon? Find the value of ‘x’ in the figure shown below using the sum of interior angles of a … 120^0 Hexagon is 6 sided. The measure of each interior angle of an equiangular n-gon is. For example, to find out the sum of the interior angles of a hexagon, you would calculate: = (−) × = × = × = So, the sum of the interior angles of a hexagon is 720 degrees. We have sum of all the angles of a triangle = 180 degree Therefore sum of angles in four triangles will be 180 x 4 = 720 degree So the sum of all the angles in a hexagon is 720 degree Think it as 6 triangles. Here is the formula: Sum of interior angles = (n − 2) × 180° S u m o f i n t e r i o r a n g l e s = ( n - 2) × 180 °. As shown in the figure above, three diagonals can be drawn to divide the hexagon into four triangles. What is the conflict of the story sinigang by marby villaceran? HD video has 720 vertical pixels. The interior angle sum of a polygon formula is equals minus two times one hundred and eighty, where is the number of sides. After examining, we can see that the number of triangles is two less than the number of sides, always. And again, try it for the square: they have found the sum of the interior angles for their hexagon, sharing their strategy for solving, or discussing a problem they encountered. Our experts can answer your tough homework and study questions. All other trademarks and copyrights are the property of their respective owners. The sum of the interior angles of a pentagon is 540 degrees. A hexagon has six sides and six interior angles. The sum of the measures of the interior angles of a hexagon is 720° The measures of the angles of a particular hexagon are in the ratio 4:5:5:8:9:9, What are the measure of these angles? © copyright 2003-2020 Study.com. The sum of the internal angle and the external angle on the same vertex is 180°. It is also made of 6 regular triangles! This concept can be extended and generalized. 1260 degrees . Hexagon Sum = 720° All regular polygons are equiangular, therefore, we can find the measure of each interior angle by: | One interior angle of a regular polygon - (n - 2). To find the sum of interior angles of a polygon, multiply the number of triangles in the polygon by 180°. What is the sum of the interior angles of a hexagon? The sum of the interior angles of the hexagon is 720 degrees. 180°(6 - 2) = 180°(4) = 720° 3 6 0 o. Another image of the hexagon on Saturn. … Textbook solution for Geometry, Student Edition 1st Edition McGraw-Hill Chapter 6 Problem 13SGR. 9 diagonals The sum of all the internal angles of a simple polygon is 180(n–2)° where n is the number of sides.The formula can be proved using mathematical induction and starting with a triangle for which the angle sum is 180°, then replacing one side with two sides connected at a vertex, and so on. Algebra Linear Equations Equations with Ratios and Proportions. METHOD 1 - Add the angles of a hexagon to the angles of a heptagon. 128.57 degrees. (6-2) *180 4 *180 720 720 degrees 720 … Sum of interior angles of a polygon with ‘p’ sides is given by: Sum of interior angles = (p - 2) 180° 3060° = (p - 2) 180° p - 2 = $\frac{3060°}{180°}$ p - 2 = 17. p = 17 + 2 p = 19. Sum Interior Angles $$\red 3$$ sided polygon (triangle) $$(\red 3-2) \cdot180$$ $$180^{\circ}$$ $$\red 4$$ sided polygon (quadrilateral) $$(\red 4-2) \cdot 180$$ $$360^{\circ}$$ $$\red 6$$ sided polygon (hexagon) $$(\red 6-2) \cdot 180$$ $$720^{\circ}$$ Problem 1. How long will the footprints on the moon last? A regular hexagon has 6 exterior angles of 60o and 6 interior angles of 120o Therefore the total sum of all the angles of a regular hexagon = (6*60)+(6*120) = 1080 degrees. (6-2) *180. Regular Hexagons: The properties of regular hexagons: All sides are the same length (congruent) and all … Know the formula from which we can find the sum of interior angles of a polygon.I think we all of us know the sum of interior angles of polygons like triangle and quadrilateral.What about remaining different types of polygons, how to know or how to find the sum of interior angles.. The polygon has 19 sides. 135 degrees. B. Maths. • Based on the knowledge that a triangle’s interior angles add up to 180 degrees, group can find the total degrees of the interior angles by splitting the hexagon into triangles. If a hexagon has six equal side lengths and six angles that are also equal, then it is a regular hexagon. What is the sum of the interior angles of a hexagon. (Hint: factor out 180 first) Type in your response below and set it equal to the sum of the interior angles of a hexagon. The sum of the exterior angles of a hexagon is: A. What is the total number degrees of all interior angles of a triangle? Example - sum of interior angles in a pentagon. The sum of the interior angles of the hexagon is 720 degrees. If students already know the sum of interior angles of a When did organ music become associated with baseball? You will always end up with two less triangles than original sides. The sum of the interior angles of a polygon is directly proportional to the number of sides it has. Thus, we get the equation: $90 + 90 + 140+150+130+x=720$ Let us solve this to find $$x$$. Angle Sum of Polygons. Therefore, each angle measures. Below given is the Formula for sum of interior angles of a polygon: If “n” represents the number of sides, then sum of interior angles of a polygon = (n – 2) × { 180 }^ { 0 } 1800 Enter your answer in the box. What is the sum of the interior angles of a hexagon? Pick a point in its interior, connect it to all its sides, get n triangles, and then subtract 360° from the total, giving us the general formula for the sum of interior angles in a simple convex polygon having n sides as: The formula for this is (number of sides - 2) * 180. Interior Angle Theorem: Definition & Formula, Drawing Polygons & Diagonals of Polygons: Lesson for Kids, Geometric Constructions Using Lines and Angles, Interior and Exterior Angles of Triangles: Definition & Examples, How to Find the Number of Diagonals in a Polygon, Concave & Convex Polygons: Definition & Examples, How to Find the Circumradius of a Triangle, Measuring the Area of Regular Polygons: Formula & Examples, Measuring the Area of a Rhombus: Formula & Examples, Median of a Trapezoid: Definition & Theorem, Quadrilaterals Inscribed in a Circle: Opposite Angles Theorem, NY Regents Exam - Geometry: Tutoring Solution, SAT Subject Test Mathematics Level 2: Practice and Study Guide, NY Regents Exam - Geometry: Help and Review, Common Core Math - Geometry: High School Standards, McDougal Littell Geometry: Online Textbook Help, Prentice Hall Geometry: Online Textbook Help, BITSAT Exam - Math: Study Guide & Test Prep, NY Regents Exam - Geometry: Test Prep & Practice, 9th Grade English: Homework Help Resource, TExES Mathematics 7-12 (235): Practice & Study Guide, High School World History: Help and Review, Biological and Biomedical Part III: Extension • Give each group 2 heptagons, and 2 decagons (Appendix C). So we’re wanting to know the sum of all of the interior angles, which means you wanna add up all of the inside angles together. There is a formula for this. Click hereto get an answer to your question ️ The sum of the exterior angles of a hexagon is: LEARNING APP; ANSWR; CODR; XPLOR; SCHOOL OS; answr. A hexagon is a polygon with six sides. Sum of angles in a triangle. Sciences, Culinary Arts and Personal Octagon. Sum of Interior Angles of a polygon = 180 ×(n−2) degrees, where n is number of sides. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply.
Recent Posts
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27643176913261414, "perplexity": 383.98242934752005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610196.46/warc/CC-MAIN-20210613161945-20210613191945-00592.warc.gz"}
|
https://indico.psi.ch/event/6857/contributions/19022/
|
# Physics of fundamental Symmetries and Interactions - PSI2019
Oct 20 – 25, 2019
PSI
Europe/Zurich timezone
## Update on Commissioning and Development of Cryogenic SOS@PULSTAR apparatus
Oct 22, 2019, 5:56 PM
1m
WHGA/001 - Auditorium (PSI)
Poster
### Speaker
Dr Ekaterina Korobkina (North Carolina State University)
### Description
Measuring particle EDMs is one of the most challenging experiments in the field of high precision physics. Present neutron EDM experiments are approaching limits of the traditional measurement technique due to both, statistic and systematic limitations. nEDM@SNS collaboration is working on realization of new approach, which employs production of trapped neutrons and measurement of neutron polarization in LHe environment below 0.5K with use of polarized He-3 as both, neutron polarization detector and co-magnetometer. This technique potentially is restricted only by neutron beam intensity. Realization of the technique relies on simultaneous precision spin manipulations of both, neutron and He-3 atoms. To start practical realization of the spin manipulation system, we have designed a smaller cryogenic NMR system, which is now undergoing commissioning at NC State University. We describe the goals and methods of the project snd report on recent progress.
### Primary author
Dr Ekaterina Korobkina (North Carolina State University)
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8460351228713989, "perplexity": 7438.292107684135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587877.85/warc/CC-MAIN-20211026103840-20211026133840-00671.warc.gz"}
|
http://www.physicsforums.com/showthread.php?s=2e4c767208829914170bafd658d099cc&p=4583590
|
# The role of phonons in momentum conservation
by hokhani
Tags: conservation, momentum, phonons, role
P: 265 In an indirect transition from the valence band maximum to conduction band minimum, the momentum of electron and hole would not change but the crystal momentum would change and this change is supplied by phonons.I have two questions here: 1) phonons don't carry momentum so how they can transfer thier momentum to the crystal? 2) phonons are part of the crystal. why do we separate their momentum from crystal momentum?
Sci Advisor P: 3,560 Phonons don't carry momentum but they carry crystal momentum which are two completely different things.
P: 265
Quote by DrDu Phonons don't carry momentum but they carry crystal momentum which are two completely different things.
Ok, but crystal momentum's change is sum of electron momentum's change and momentum change of the crystal;Ok? If yes, in the explained situation electron momentum is not changed so the momentum of crystal must be changed by phonons!
Sci Advisor P: 3,560 The role of phonons in momentum conservation In an indirect transition, the electrons crystal momentum changes this is compensated by a change of crystal momentum of the phonon. I don't see any problem here. The true momentum of the electron doesn't interest anyone in that context, as it isn't in a momentum eigenstate anyhow.
P: 265
Quote by DrDu The true momentum of the electron doesn't interest anyone in that context, as it isn't in a momentum eigenstate anyhow.
Yes, But what I meant was the mean value of true momentum of electron which is "mass of electron times its group velocity".
Sci Advisor P: 3,560 As I already explained in another thread, the lattice itself (as opposed to the phonons) can take up arbitrary amounts of momentum, so momentum conservation is always trivial.
Related Discussions Atomic, Solid State, Comp. Physics 18 Quantum Physics 9 General Physics 1 Classical Physics 2 General Engineering 0
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8050955533981323, "perplexity": 1328.6109491052678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510264270.11/warc/CC-MAIN-20140728011744-00194-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=18&t=22049&p=63695
|
## Post-Module Assessment Q. 34
$\lambda=\frac{h}{p}$
William Lan 2l
Posts: 73
Joined: Fri Sep 29, 2017 7:07 am
### Post-Module Assessment Q. 34
If an electron (mass 9.11 x 10^-31 kg) has an associated wavelength of 7.28 x 10^-9 m, what is its speed? Is your answer reasonable, why?
A. 1.00 x 10-5 m.s-1. Yes. 1.00 x 10-5 m.s-1 is reasonable for e- as it is less than the speed of light, c = 3.0 x 108m.s-1.
B.1.00 x 10-5 m.s-1. No. 1.00 x 10-5 m.s-1 is not reasonable for e- as it is significantly slower than the speed of light, c = 3.0 x 108 m.s-1.
C. 1.00 x 105 m.s-1 . Yes. 1.00 x 105 m.s-1 is reasonable for e- as it is less than the speed of light, c = 3.0 x 108m.s-1.
D. 1.00 x 105 m.s-1. No. 1.00 x 105 m.s-1 is not reasonable for e- as it is significantly slower than the speed of light, c = 3.0 x 108 m.s-1.
So how exactly do you do this problem? I used lambda = h/mv equation and I switched it around so that v= h/(m x lambda). When I solved for V, I got 99939 m/s, which is none of the answers.
Chem_Mod
Posts: 17949
Joined: Thu Aug 04, 2011 1:53 pm
Has upvoted: 406 times
### Re: Post-Module Assessment Q. 34
When you do the calculation you did, you should get 1*10^5 m/s which is one of the answers given. Your formula is correct, but there is a mistake with the calculation.
Yashaswi Dis 1K
Posts: 56
Joined: Fri Sep 29, 2017 7:04 am
### Re: Post-Module Assessment Q. 34
I think your answer is like correct, meaning approx. close to the answers given b/c when you round the answer you got and change it to scientific notation, you should get 1.00 * 10^5 m/s and since it's way less than speed of light, which if the fastest speed known so far, answer should be reasonable. Again, check the video module maybe because it has more specifics on that. Hope this helps!
Lily Guo 1D
Posts: 64
Joined: Fri Sep 29, 2017 7:03 am
### Re: Post-Module Assessment Q. 34
Yashaswi Dis H wrote:I think your answer is like correct, meaning approx. close to the answers given b/c when you round the answer you got and change it to scientific notation, you should get 1.00 * 10^5 m/s and since it's way less than speed of light, which if the fastest speed known so far, answer should be reasonable. Again, check the video module maybe because it has more specifics on that. Hope this helps!
How do you know that the speed of the electron is reasonably fast or not? Is there a specific range of values that the speed should fall in? I got ~1.00 x 10^5 m/s, I'm just not sure if this is a reasonable speed or not since it's so much less than the speed of light.
Yashaswi Dis 1K
Posts: 56
Joined: Fri Sep 29, 2017 7:04 am
### Re: Post-Module Assessment Q. 34
I am not exactly sure but as far as I know, the speed of light is the fastest speed on earth, unless research shows otherwise. Thus, if you get a speed less than 3.00 * 10^8 m/s, I am pretty sure it's reasonable b/c it's less than speed of light, which is the fastest speed known so far. That's my thought process when I answered the module questions and it helped me out...so hope this helps!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728423118591309, "perplexity": 1155.0839212297847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528490.48/warc/CC-MAIN-20191210180555-20191210204555-00255.warc.gz"}
|
https://ms609.github.io/TreeDist/reference/NyeSimilarity.html
|
NyeSimilarity() and NyeSplitSimilarity() implement the Generalized Robinson–Foulds tree comparison metric of Nye et al. (2006). In short, this finds the optimal matching that pairs each branch from one tree with a branch in the second, where matchings are scored according to the size of the largest split that is consistent with both of them, normalized against the Jaccard index. A more detailed account is available in the vignettes.
NyeSimilarity(
tree1,
tree2 = NULL,
similarity = TRUE,
normalize = FALSE,
normalizeMax = !is.logical(normalize),
reportMatching = FALSE,
diag = TRUE
)
NyeSplitSimilarity(
splits1,
splits2,
nTip = attr(splits1, "nTip"),
reportMatching = FALSE
)
## Arguments
tree1 Trees of class phylo, with leaves labelled identically, or lists of such trees to undergo pairwise comparison. Where implemented, tree2 = NULL will compute distances between each pair of trees in the list tree1 using a fast algorithm based on Day (1985). Trees of class phylo, with leaves labelled identically, or lists of such trees to undergo pairwise comparison. Where implemented, tree2 = NULL will compute distances between each pair of trees in the list tree1 using a fast algorithm based on Day (1985). Logical specifying whether to report the result as a tree similarity, rather than a difference. If a numeric value is provided, this will be used as a maximum value against which to rescale results. If TRUE, results will be rescaled against a maximum value calculated from the specified tree sizes and topology, as specified in the 'Normalization' section below. If FALSE, results will not be rescaled. When calculating similarity, normalize against the maximum number of splits that could have been present (TRUE), or the number of splits that were actually observed (FALSE)? Defaults to the number of splits in the better-resolved tree; set normalize = pmin.int to use the number of splits in the less resolved tree. Logical specifying whether to return the clade matchings as an attribute of the score. Logical specifying whether to return similarities along the diagonal, i.e. of each tree with itself. Applies only if tree2 is a list identical to tree1, or NULL. Logical matrices where each row corresponds to a leaf, either listed in the same order or bearing identical names (in any sequence), and each column corresponds to a split, such that each leaf is identified as a member of the ingroup (TRUE) or outgroup (FALSE) of the respective split. Logical matrices where each row corresponds to a leaf, either listed in the same order or bearing identical names (in any sequence), and each column corresponds to a split, such that each leaf is identified as a member of the ingroup (TRUE) or outgroup (FALSE) of the respective split. (Optional) Integer specifying the number of leaves in each split.
## Value
NyeSimilarity() returns an array of numerics providing the distances between each pair of trees in tree1 and tree2, or splits1 and splits2.
## Details
The measure is defined as a similarity score. If similarity = FALSE, the similarity score will be converted into a distance by doubling it and subtracting it from the number of splits present in both trees. This ensures consistency with JaccardRobinsonFoulds.
Note that NyeSimilarity(tree1, tree2) is equivalent to, but slightly faster than, JaccardRobinsonFoulds (tree1, tree2, k = 1, allowConflict = TRUE).
## Normalization
If normalize = TRUE and similarity = TRUE, then results will be rescaled from zero to one by dividing by the mean number of splits in the two trees being compared.
You may wish to normalize instead against the number of splits present in the smaller tree, which represents the maximum value possible for a pair of trees with the specified topologies (normalize = pmin.int); the number of splits in the most resolved tree (normalize = pmax.int); or the maximum value possible for any pair of trees with n leaves, n - 3 (normalize = TreeTools::NTip(tree1) - 3L).
If normalize = TRUE and similarity = FALSE, then results will be rescaled from zero to one by dividing by the total number of splits in the pair of trees being considered.
## References
Nye TMW, Liò P, Gilks WR (2006). “A novel algorithm and web-based tool for comparing two alternative phylogenetic trees.” Bioinformatics, 22(1), 117--119. doi: 10.1093/bioinformatics/bti720 , https://doi.org/10.1093/bioinformatics/bti720.
Other tree distances: JaccardRobinsonFoulds(), KendallColijn(), MASTSize(), MatchingSplitDistance(), NNIDist(), PathDist(), Robinson-Foulds, SPRDist(), TreeDistance()
## Examples
library('TreeTools')
NyeSimilarity(BalancedTree(8), PectinateTree(8))
#> [1] 3.8VisualizeMatching(NyeSimilarity ,BalancedTree(8), PectinateTree(8))
NyeSimilarity(as.phylo(0:5, nTip = 8), PectinateTree(8))
#> [1] 3.166667 2.750000 2.750000 2.500000 2.450000 2.500000NyeSimilarity(as.phylo(0:5, nTip = 8), similarity = FALSE)
#> 1 2 3 4 5
#> 2 1.333333
#> 3 1.333333 1.333333
#> 4 2.166667 2.333333 2.333333
#> 5 2.333333 2.166667 2.333333 1.000000
#> 6 2.000000 2.000000 1.500000 1.500000 1.500000
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5725595951080322, "perplexity": 1834.248306215051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487660269.75/warc/CC-MAIN-20210620084505-20210620114505-00296.warc.gz"}
|
http://gna.org/bugs/?22075
|
## BRILLANT - Bugs: bug #22075, Wrong LaTeX macro for \notin B...
Show feedback again
You are not allowed to post comments on this tracker with your current authentification level.
## bug #22075: Wrong LaTeX macro for \notin B symbol
Submitted by: Piotr Trojanek Submitted on: Fri 23 May 2014 11:06:55 AM UTC Category: None Module: Docs Priority: 5 - Normal Severity: 2 - Minor Status: Confirmed Privacy: Public Assigned to: None Open/Closed: Open
Mon 01 Sep 2014 04:16:24 PM UTC, SVN revision 1498:
Fix bug #22075
Samuel Colin <scolin>
Tue 03 Jun 2014 08:51:19 AM UTC, comment #2:
I took a look, the bug is confirmed.
The correct ASCII syntax for \notin is /: (as is described in the B reference manual or as is seen in the examples of bbench/)
As for duplicates, there are indeed three files:
./bcaml/toolchain/btyper2/lib/doc/B2.sty : probably the most recent
./docs/brillant-bcaml/B2.sty
./latex/B2/B2.sty : the oldest
Fixing this would entail:
- Replacing :/ by /:
- Integrate the event-B keywords of docs/brillant-bcaml/B2.sty into bcaml/toolchain/btyper2/lib/doc/B2.sty (saw that with a quick diff, but there might be a few more macros to integrate)
- Copy bcaml/toolchain/btyper2/lib/doc/B2.sty as latex/B2/B2.sty
- Symlink the other files to latex/B2/B2.sty
Samuel Colin <scolin>
Tue 27 May 2014 08:54:28 AM UTC, comment #1:
Hello Piotr
Sorry I don't understand clearly your bug report.
find . -name B2.sty | xargs grep notin
gives me
./docs/brillant-bcaml/B2.sty:{:/}{\$\notin\$}{1}%
./bcaml-0.6/toolchain/btyper2/lib/doc/B2.sty:{:/}{\$\notin\$}{1}%
./latex/B2/B2.sty:{:/}{\$\notin\$}{1}%
with the correct assignement :/ in the three cases...
B2.sty was designed by Samuel to be used in the context of the "listing" LaTeX package. The associated test.tex file is compiled correctly (right typesetting of notin...)
Right. There should be only one B2.styn the one located in latex/B2...
PS : do you want to be added in the devs group for BRILLANT so that you can commit your ideas/corrections ...? (just ask)
Georges Mariano <gmariano>
Fri 23 May 2014 11:06:55 AM UTC, original submission:
All three B2.sty files in the repository define the \notin symbol as ":/" instead of "/:".
First, I am not sure why there are three B2.sty files instead of one. Second, I am not sure if there is some hidden reason for using the current notation. Anyway, it is trivial to fix.
Piotr Trojanek <ptroja>
No files currently attached
Depends on the following items: None found
Items that depend on this one: None found
Carbon-Copy List
• -unavailable- added by scolin (Posted a comment)
• -unavailable- added by gmariano (Posted a comment)
• -unavailable- added by ptroja (Submitted the item)
•
Do you think this task is very important?
This task has 0 encouragements so far.
Only logged-in users can vote.
Please enter the title of George Orwell's famous dystopian book (it's a date):
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977296948432922, "perplexity": 13613.974148296516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098987.83/warc/CC-MAIN-20150627031818-00163-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/discrete-math/7398-countable-set.html
|
# Math Help - Countable set
1. ## Countable set
hello.
I cant seem to prove the following:
$A,B$ are two different sets such that $A \subset B$
prove that if there exists an injective function $f:A\to B$.
then there exists a countable infinite set $C \subset A$
(C and A can overlap)
can someone give me some hints?
thanks.
2. Originally Posted by parallel
hello.
I cant seem to prove the following:
$A,B$ are two different sets such that $A \subset B$
prove that if there exists an injective function $f:A\to B$.
then there exists a countable infinite set $C \subset A$
(C and A can overlap)
can someone give me some hints?
thanks.
There's something wrong here. First of all there is nothing in your statement to force A to be at least countably infinite, which is a requirement for C to be countably infinite.
For a counter-example let A = {1, 2, 3} and B = {1, 2, 3, 4, 5, 6, 7}. Define an injection $f: A \to B: a \mapsto a$. Then $A \subset B$ and f is an injection, but there exists no set $C \subset A$ that is countably infinite.
-Dan
3. I'm sorry,I typed it incorrectly
should be:
prove that if there exists an injective function $f:B\to A$.
then there exists a countable infinite set $C \subset A$
(C and A can overlap).
thanks
4. Originally Posted by parallel
I'm sorry,I typed it incorrectly
should be:
prove that if there exists an injective function $f:B\to A$.
then there exists a countable infinite set $C \subset A$
(C and A can overlap).
thanks
Which is again not true if both these sets are finite.
5. o.k so I think we can assume they are infinite(I think it's obvious),although I dont even have a clew,how to even start proving this.
6. If we were given that B is infinite then the statement is true.
We can prove that any infinite set has a countablely infinite subset call it $
D \subseteq B$
.
The subset of A you want is $\overrightarrow f (D)$, that is the image of D under f.
7. If $B$ is infinite this implies that $A$ is also infinite and at least as large as $B$ because of the injective map. Now, the property of the integers say there are contained up to cardinality in any infinite set, thus, there exist an injection
$i:\mathbb{Z}\to C$
Then the image of the function,
$i[\mathbb{Z}]\subseteq C$
(I hope that is what you mean by overlap).
8. I dont understand it.
I know that inorder to prove a set is countable,I need to show that there exists an injective function from this set to N(positive integers)
thanks
9. Originally Posted by parallel
I dont understand it.
I know that inorder to prove a set is countable,I need to show that there exists an injective function from this set to N(positive integers)
thanks
I assume that you are told $B$ is an infinite set.
1) $A$ is an infinite set also because the injection from $f:B\to A$ implies that $|B|\leq |A|$ in cardinality. So it must be infinite becuase it is larger than another infinite set (it cannot be finite because any infinite set is larger than any finite set).
2)Therefore, there exists an injection map $i: \mathbb{Z}\to A$. Because $|\mathbb{Z}|$ is the smallest possible infinite set which implies its size is contained in any infinite set. Thus, we state that in terms of an injection (since injection shows that one set is less than another in cardinality).
3)The image of the injection $i [ \mathbb{Z}]$ (image of a function, you should know what that is) is an infinite set that is the size of the integers and is contained in $A$. Thus, $i[\mathbb{Z}]\subset A$ which proves what you were asking.
10. Originally Posted by ThePerfectHacker
1) $A$ is an infinite set also because the injection from.
While that is perfectually true, I think that is the point of this problem.
In other words, I think that is what the student is asked to prove.
11. Thank you very much Plato.
12. Originally Posted by Plato
While that is perfectually true, I think that is the point of this problem.
In other words, I think that is what the student is asked to prove.
I can prove that! If that is what he wants.
I shall show there cannot exist a surjection,
$s: F\to I$
where $F$ is finite and $I$ is infinite.
Assume there is one.
Then since $I$ is infinite $\exists I' \subset I, |I'|=|I|$ (definition of infinity).
Then, the inverse image,
$s^{-1}[I']$ is a proper subset of $F$ (because that set was proper in the infinite set, the inverse image preserve this). But since $|I|=|I'|$ we have $|s^{-1}[I']|=|F|$. Thus, there exists a proper subset of a finite set having the same cardinality which is impossible by the definition of what finiteness is.
13. This is another case where the uniformity of mathematical definitions fails.
In the above, the definition of ‘infinite set’ was used.
Well which definition. This makes hard to help with knowing the text being followed.
There are at least three or maybe four popular definitions:
A set is infinite if it is not finite. A finite set is equipotent to a natural number.
A set is infinite if it is equipotent to a proper subset of itself (called Dedekind infinite).
A set is infinite if it contains a copy of the natural numbers.
I suspect that the purpose of the question was to show that “If an infinite set is mapped injectively of another set then the final set must also be infinite.” From the wording it is difficult to know what definition is being used. In any case my first response works.
14. Originally Posted by Plato
This is another case where the uniformity of mathematical definitions fails.
In the above, the definition of ‘infinite set’ was used.
Well which definition. This makes hard to help with knowing the text being followed.
There are at least three or maybe four popular definitions:
A set is infinite if it is not finite. A finite set is equipotent to a natural number.
A set is infinite if it is equipotent to a proper subset of itself (called Dedekind infinite).
A set is infinite if it contains a copy of the natural numbers.
I suspect that the purpose of the question was to show that “If an infinite set is mapped injectively of another set then the final set must also be infinite.” From the wording it is difficult to know what definition is being used. In any case my first response works.
But all of these definitions are equivalent!
Makes no difference what to use.
(I am only familar with Dedekind infinite).
15. Originally Posted by ThePerfectHacker
But all of these definitions are equivalent!
Makes no difference what to use.
But the definition in use changes the approach to the proof.
That was my point.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 41, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9790800213813782, "perplexity": 280.7844783896968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096944.75/warc/CC-MAIN-20150627031816-00015-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://www.ipam.ucla.edu/abstract/?tid=8800&pcode=CMAWS2
|
## A Geometric Interpretation of the Characteristic Polynomial of a Hyperplane Arrangement
#### Caroline KlivansUniversity of ChicagoMathematics and Computer Science
We consider projections of points in $R^n$ onto chambers of real linear hyperplane arrangements. We show that the coefficients of the characteristic polynomial are proportional to the average spherical volumes of the sets of points that are projected onto faces of a given dimension. As a corollary we obtain that for real finite reflection arrangements the coefficients of the characteristic polynomial precisely give the spherical volumes of points projected onto faces of a fixed dimension of the fundamental chamber. The connection between projection volumes and the characteristic polynomial is established by considering angle sums of the associated zonotope.
This talk reflects joint work with Mathias Drton and Ed Swartz.
Back to Workshop II: Combinatorial Geometry
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9568727612495422, "perplexity": 534.1239231051809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396300.22/warc/CC-MAIN-20200527235451-20200528025451-00348.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/university-calculus-early-transcendentals-3rd-edition/chapter-2-section-2-6-limits-involving-infinity-asymptotes-of-graphs-exercises-page-108/63
|
## University Calculus: Early Transcendentals (3rd Edition)
- Horizontal asymptote: $y=0$ - Vertical asymptote: $x=1$ As $x\to\pm\infty$, the dominant term is $0$. As $x\to1$, the dominant term is $1/(x-1)$.
$$y=\frac{1}{x-1}$$ We are interested in the behavior of function $y$ as $x\to\pm\infty$ as well as the behavior of $y$ as $x\to1$, which is where the denominator is $0$. We can rewrite the function into a polynomial with a remainder as follows: $$y=\frac{1}{x-1}=0+\frac{1}{x-1}$$ - As $x\to\pm\infty$, $(x-1)$ would get infinitely large, meaning that the curve will approach the line $y=0$, which is also the horizontal asymptote. Also, as $x$ becomes infinitely large, $1/(x-1)$ will become closer to $0$. So we can say the dominant term is $0$ as $x\to\pm\infty$. - As $x\to1$, $(x-1)$ would approach $0$, meaning that the curve will approach $\infty$. So $x=1$ is the vertical asymptote. And as $x\to1$, $1/(x-1)$ becomes infinitely large. So the dominant term as $x\to1$ is $1/(x-1)$. The graph is shown below. The red curve is the graph of $y$, while the blue line is the horizontal asymptote $y=0$ and the green one is the vertical asymptote $x=1$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9924293160438538, "perplexity": 86.7490633108112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00397.warc.gz"}
|
http://www.logicmatters.net/2013/07/28/the-contemporary-conception-of-logic/
|
# ‘The contemporary conception of logic’
Warren Goldfarb, in his paper ‘Frege’s conception of logic’ in The Cambridge Companion to Frege, announces that his ‘first task is that of delineating the differences between Frege’s conception of logic and the contemporary one’. And it is not a new idea that there are important contrasts to be drawn between Frege’s approach and some modern views of logic. But one thing that immediately catches the eye in Goldfarb’s prospectus is his reference to the contemporary conception of logic. And that should surely give us some pause, even before reading on.
So how does Goldfarb characterize this uniform contemporary conception? It holds, supposedly, that
the subject matter of logic consists of logical properties of sentences and logical relations among sentences. Sentences have such properties and bear such relations to each other by dint of their having the logical forms they do. Hence, logical properties and relations are defined by way of the logical forms; logic deals with what is common to and can be abstracted from different sentences. Logical forms are not mysterious quasi-entities, à la Russell. Rather, they are simply schemata: representations of the composition of the sentences, constructed from the logical signs (quantifiers and truth-functional connectives, in the standard case) using schematic letters of various sorts (predicate, sentence, and function letters). Schemata do not state anything and so are neither true nor false, but they can be interpreted: a universe of discourse is assigned to the quantifiers, predicate letters are replaced by predicates or assigned extensions (of the appropriate arities) over the universe, sentence letters can be replaced by sentences or assigned truth-values. Under interpretation, a schema will receive a truth-value. We may then define: a schema is valid if and only if it is true under every interpretation; one schema implies another, that is, the second schema is a logical consequence of the first, if and only if every interpretation that makes the first true also makes the second true. A more general notion of logical consequence, between sets of schemata and a schema, may be defined similarly. Finally, we may arrive at the logical properties or relations between sentences thus: a sentence is logically true if and only if it can be schematized by a schema that is valid; one sentence implies another if they can be schematized by schemata the first of which implies the second.
Note an oddity here (something Timothy Smiley has complained about in another context). It is said that a ‘logical form’ just is a schema. So what is it then for a sentence to have a logical form (as you can’t have a schema): presumably it is for the sentence to be an instance of the schema. But the sentence ‘Either grass is green or grass is not green’ — at least once we pre-process it as ‘Grass is green $\lor\ \neg$grass is green’ — is an instance of both the schema $P \lor \neg P$ and the schema $Q \lor \neg Q$. These are two different schemata: but surely no contemporary logician when thinking straight would say that the given sentence, for this reason at any rate, has two different logical forms. So something is amiss.
But let’s hang fire on this point. The more immediate question is: just how widely endorsed is something like Goldfarb’s described conception of logic? For evidence, we can take a look at some well-regarded math. logic textbooks from the modern era, i.e. from the last fifty years — which, I agree, is construing ‘contemporary’ rather generously (but not to Goldfarb’s disadvantage). We’d need to consider e.g. how the various authors regard formal languages, what they take logical relations to hold between, how they regard the letters which appear in logical formulae, what accounts they give of logical laws and logical consequence, and how they regard formal proofs. To be sure, we might expect to find many recurrent themes running through different modern treatments (after all, there is only a limited number of options). But will we find enough commonality to make it appropriate to talk of `the’ contemporary conception of logic?
Of course, I hope it will be agreed that this question is interesting in its own right: I’m really just using Goldfarb as a provocation go on the required trawl through the literature. I’ve picked off my shelves a dozen or so textbooks from Mendelson (1962) to (say) Chiswell and Hodges (2007), and it will be interesting to see how many share the view of logic which Goldfarb describes.
Preliminary report: to my surprise (as it isn’t how I remembered it) Mendelson’s conception of logic does fit Goldfarb’s account very well. At the propositional level, tautologies for Mendelson are a kind of schema (so aren’t true!); logical consequence is defined as holding between schemata; Mendelson’s formal theory is a theory for deriving schemata. Likewise, charitably read, for his treatment of quantificational logic. Moreover Mendelson avoids the unnecessary trouble that Goldfarb gets himself into when he talks of logical form: Mendelson too talks of logical structure, but he supposes that this is ‘made apparent’ by using statement forms, not that it is to be identified with statement forms. So 1/1 for Goldfarb.
So far so good. But chronologically the next book I’ve looked at is ‘little Kleene’ , i.e. Kleene’s Mathematical Logic (1967). And Goldfarb’s account doesn’t apply to this. For a start, Kleene’s $P \lor \neg P$ is not schematic but picks out a truth in some object language fixed in the context, as it might be Jack loves Jill or Jack doesn’t love Jill or $3 < 5 \lor 3 \not< 5$ Which (to cut the story short) makes the score 1/2.
I’ll let you know what the score is when I’ve looked at the other texts on my list …
This entry was posted in Logic. Bookmark the permalink.
### One Response to ‘The contemporary conception of logic’
1. Clark says:
I look forward with interest to your review of the dividing line between syntax and semantics in various authors. Those adhering to a somewhat formalist view might not accept your example of difference in schemata, however. For instance Boolos in Provability on page 3 introduces the principle of substitution inductively, which would include the two examples in a single schema.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5578949451446533, "perplexity": 1324.7556621019041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096780.24/warc/CC-MAIN-20150627031816-00234-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://myelectrical.com/notes/entryid/241/dielectric-loss-in-cables
|
# Dielectric loss in cables
By on
Cable cross section showing
insulation
Dielectrics (insulating materials for example) when subjected to a varying electric field, will have some energy loss. The varying electric field causes small realignment of weakly bonded molecules, which lead to the production of heat. The amount of loss increases as the voltage level is increased. For low voltage cables, the loss is usually insignificant and is generally ignored. For higher voltage cables, the loss and heat generated can become important and needs to be taken into consideration.
Dielectrics (insulating materials for example) when subjected to a varying electric field, will have some energy loss. The varying electric field causes small realignment of weakly bonded molecules, which lead to the production of heat. The amount of loss increases as the voltage level is increased. For low voltage cables, the loss is usually insignificant and is generally ignored. For higher voltage cables, the loss and heat generated can become important and needs to be taken into consideration.
Dielectric loss is measured using what is known as the loss tangent or tan delta (tan δ). In simple terms, tan delta is the tangent of the angle between the alternating field vector and the loss component of the material. The higher the value of tan δ the greater the dielectric loss will be. For a list of tan δ values for different insulating material, please see the Cable Insulation Properties note.
Note: in d.c. cables with a static electric field, there is no dielectric loss. Hence the consideration of dielectric loss only applies to a.c. cables.
## Cable Voltage
Dielectric loss only really become significant and needs to be taken into account at higher voltages. IEC 60287 "Electric Cables - Calculation of the current rating", suggests that dielectric loss need only be considered for cables above the following voltage levels:
Cable Type U0, kV
Butyl Rubber 18
EDR 63.5
Impregnated Paper (oil or gas-filled) 63.5
Impregnated Paper (solid) 38
PE (high and low density) 127
PVC 6
XLPE (filled) 63.5
XLPE (unfilled) 127
## Cable Dielectric Loss
Cable Capacitance
Cable capacitance can be obtained from manufacturers or for circular conductors calculated using the following:
$C= ε 18ln( D i d c ) 10 −9 F. m −1$
Given the tan δ and capacitance of the cable, the dielectric loss is easily calculated:
$W d =ω C U 0 2 tan δ$
It is possible to use the above for other conductor shapes if the geometric mean is substituted for Di and dc.
## Symbols
dc - diameter of conductor, mm
Di - external diameter of insulation, mm
C - cable capacitance per unit length, F.m-1
U0 - cable rated voltage to earth, V
Wd - dielectric loss per unit length, W.m-1
tan δ - loss factor for insulation
ε - insulation relative permitivity
ω - angular frequency (2πf)
More interesting Notes:
Steven has over twenty five years experience working on some of the largest construction projects. He has a deep technical understanding of electrical engineering and is keen to share this knowledge. About the author
myElectrical Engineering
Why a Sine Wave?
I received this question by email a few weeks. First thoughts was that it is a product of the mathematics of rotating a straight conductor in a magnetic...
How Electrical Circuits Work
If you have no idea how electrical circuits work, or what people mean then they talk about volts and amps, hopefully I can shed a bit light. I’m intending...
Electric Motors
Collection of links to various places with useful motor information. I’ll try and return to the page every now and again to update it with any motor notes...
Photovoltaic (PV) - Utility Power Grid Interface
Photovoltaic (PV) systems are typically more efficient when connected in parallel with a main power gird. During periods when the PV system generates energy...
Skin Tapping Input
Tapping your forearm or hand with a finger could soon be the way you interact with gadgets. A new technology created by Microsoft and Carnegie Mellon ...
What is an Open Delta Transformer
In three phase systems, the use of transformers with three windings (or legs) per side is common. These three windings are often connected in delta or...
Cold Fusion (or not?)
Recently I have seen a few interesting articles on viable cold fusion; the combining of atoms at room like temperatures to create boundless energy. Now...
IEEE Winds of Change
IEEE TV has a part series of videos on wind power and it's implication. For a really good overview to the technologies and issues around wind power, these...
Questions - Reputation and Privilege
Our question and answer system while letting you do exactly what it says, is much more. It is a dynamic user driven system, where our users not only ask...
How to Size Power Cable Duct
Some colleagues had an issue earlier in the week on sizing conduits to be cast in concrete for some power cables . It became clear that none of us had...
## Have some knowledge to share
If you have some expert knowledge or experience, why not consider sharing this with our community.
By writing an electrical note, you will be educating our users and at the same time promoting your expertise within the engineering community.
To get started and understand our policy, you can read our How to Write an Electrical Note
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7040961980819702, "perplexity": 2351.351443507809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00567.warc.gz"}
|
https://cartesianproduct.wordpress.com/2012/07/15/more-on-error-correction/
|
# More on error correction
Considering the chances of a decoding error (ie having more errors than our error correction code can handle)…
$P(no errors) = (1 -p)^n$ where p is the probability of a bit flip and n the length of the code.
So in our case that gives $P(no errors) = (0.9)^4 = 0.6561$
But we can also work out the possibility of k bit flips, using the binomial distribution:
$P(k_{errors}) = _nC_k p^k (1-p)^{n-k}$
So what are the prospects of a decoding error. This is the lower bound (only the lower bound because – as the table in the previous post showed – some errors might be detected and some not for a given Hamming distance):
$P(total errors)$ = $\sum_{k = d}^n$ $_nC_kp^k(1-p)^{n-k}$
For us $d=4, n=4$, so therefore as $0! = 1$, the lower bound in our case is $0.1^4 (0.9)^{4 - 4} = 0.0001$ which isn’t bad even for such a noisy channel.
But what is the guaranteed success rate?
Here we are looking at:
$\sum_{k=0}^{\lfloor\frac{d - 1}{2}\rfloor}$ $_nC_k p^k (1 - p)^{n - k}$
(Recalling $d \geq 2v +1$ for v bits of error correction)
In our case this gives:
$_4C_0 p^0 (1 -p)^4 + _4C_1 p^1 (1 -p)^3$ $= 0.6561 + 0.2916 = 0.9477$
This shows the power of error correction – even though there is a 10% chance of an individual bit flipping, we can actually keep the error down to just over 5%.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8726085424423218, "perplexity": 421.0703857078309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541529.0/warc/CC-MAIN-20161202170901-00253-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://www.flyingcoloursmaths.co.uk/portfolio-post/quotable-maths-abdur-rauf/
|
# Quotable maths: Abdur-Rauf
The arithmetic of life does not always have a logical answer.
-Inshirah Abdur-Rauf
## Colin
Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8924073576927185, "perplexity": 9409.201772334747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986647517.11/warc/CC-MAIN-20191013195541-20191013222541-00069.warc.gz"}
|
https://physics.stackexchange.com/questions/69828/equivalence-theorem-of-the-s-matrix?noredirect=1&lq=1
|
# Equivalence Theorem of the S-Matrix
as far as I know the equivalence theorem states, that the S-matrix is invariant under reparametrization of the field, so to say if I have an action $S(\phi)$ the canonical change of variable $\phi \to \phi+F(\phi)$ leaves the S-matrix invariant.
In Itzykson's book there is now an exercise in which you have to show, that the generating functional $$Z^\prime(j)=\int \mathcal{D}[\phi] \exp{\{iS(\phi)+i\int d^4x\hspace{0.2cm} j(x)(\phi +F(\phi)\}}$$ gives the same S-matrix as the ordinary generating functional with only $\phi$ coupled to the current due to the vanishing contact terms. He then writes, that this proves the equivalence theorem, which I do not fully understand.
Suppose I take this canonical change of variable, then I get a new action $S^\prime(\phi)=S(\phi+F(\phi))$ and a generating functional $$Z(j)=\int \mathcal{D}[\phi] \exp\{iS(\phi+F(\phi))+i\int d^4x\hspace{0.2cm} j(x)\phi\}$$ If I now "substitute" $\phi+F(\phi)=\chi$ I get $$Z(j)=\int \mathcal{D}[\chi] \det\left(\frac{\partial \phi}{\partial \chi}\right) \exp\{iS(\chi)+i\int d^4x\hspace{0.2cm} j(x)\phi(\chi)\}$$ with $\phi(\chi)=\chi + G(\chi)$ the inverse of $\chi(\phi)$. Therefore comparing $Z(j)$ and $Z^\prime(j)$ I get an extra jacobian determinant.
Where is my fallacy, or why should the determinant be 1?
• To clarify, the function $F(\phi)(x)$ is a function only of $\phi(x)$, or is it allowed to depend on local derivatives, or is it a general smooth functional? Jul 2 '13 at 17:54
• Echoing @BebopButUnsteady's comment, what is your definition of a "canonical change of variables" in a Lagrangian theory? Jul 2 '13 at 18:06
• I mean a change of variables, which is invertible and has therefore a nonvanishing jacobi determinant. It should further be of the kind $x \to x + F(x)$ so to say a point transformation. The function $F(\phi)$ should only depend on $\phi(x)$ here. Jul 2 '13 at 18:18
• There is a discussion in Zee (page 68, Appendix 2 : Field Redefinition) Jul 3 '13 at 8:49
• @gaugi: I agree with you that this at least somewhat more subtle than the texts imply. I will try to write an (incomplete) answer summarizing what I've figured out. Jul 3 '13 at 14:55
First, the equivalence theorem refers to S-matrix elements rather than off-shell n-point functions, or their generator $Z[j]$, which are generally different. What you have to study is the LSZ formula that gives the relation between S-matrix elements and expectation values of time-ordered product of fields (off-shell n-point functions, what one gets after taking derivatives of $Z[j]$ and setting $j=0$). You will see that even thought these time-ordered products are different, the S-matrix elements are equal just because the residues of these products in the relevant poles are "equal" (they are strictly equal if the matrix elements of the fields between vacuum and one-particle states ( $\langle p|\phi|0\rangle$) are equal, if they are not equal, but both of them are different from zero, one can trivially adapt the LSZ formula to give the same results).
Second, the generating functional
$$Z[j]=\int \mathcal{D}[\phi] \exp{\{iS(\phi)+i\int d^4x\hspace{0.2cm} j(x)\phi(x) \}}$$
is not valid for all actions functionals $S$. I will illustrate this with a quantum-mechanical example—the generalization to quantum field theory is trivial. The key point is to notice that the "fundamental" path integral is the phase-space or Hamiltonian path integral, that is, the path integral before integrating out momenta.
Suppose an action $S[q]=\int L (q, \dot q) \, dt=\int {\dot q^2\over 2}-V(q)\, dt$, then the generating of n-point functions is:
$$Z[j]\sim\int \mathcal{D}[q] \exp{\{iS(q)+i\int dt\hspace{0.2cm} j(t)q(t) \}}$$
The Hamiltonian that is connected with the action above is $H(p,q)={p^2\over 2}+V(q)$ and the phase-space path integral is: $$Z[j]\sim \int \mathcal{D}[q]\mathcal{D}[ p] \exp{\{i\int p\dot q - H(p,q)\;dt+i\int dt\hspace{0.2cm} j(t)q(t) \}}$$ Now, if one performs a change of coordinates $q=x+G(x)$ in the Lagrangian: $$\tilde L(x,\dot x)=L(x+G(x), \dot x(1+G(x)))={1\over 2}\dot x^2 (1+G'(x))^2-V(x+G(x))$$ the Hamiltonian is: $$\tilde H={\tilde p^2\over 2(1+G'(x))}+V\left( x+G(x)\right)$$ where the momentum is $\tilde p={d\tilde L\over d\dot x }=\dot x \; (1+G'(x))^2$. A change of coordinates implies a change in the canonical momentum and the Hamiltonian. And now the phase-space path integral is: $$W[j]\sim \int \mathcal{D}[x]\mathcal{D}[\tilde p] \exp{\{i\int \tilde p\dot x - \tilde H(\tilde p,x)\;dt+i\int dt\hspace{0.2cm} j(t)x(t) \}}\,,$$ as you were probably expecting. However, when one integrates the momentum, one obtains the Langrangian version of the path integral: $$W[j]\sim\int \mathcal{D}[x]\;(1+G'(x)) \exp{\{iS[x+G(x)]+i\int dt\hspace{0.2cm} j(t)x(t) \}}$$ where $(1+G'(x))$ is just $\det {dq\over dx}$. Thus, your second equation is wrong (if one assumes that the starting kinetic term is the standard one) since the previous determinant is missing. This determinant cancels the determinant in your last equation. Nonetheless, $Z[j]\neq W[j]$, since changing the integration variable in the first equation of this answer $$Z[j]\sim\int \mathcal{D}[x]\;(1+G'(x)) \exp{\{iS[x+G(x)]+i\int dt\hspace{0.2cm} j(t)(x(t)+G(x)) \}}$$ which does not agree with $W[j]$ due to the term $j(t)(x(t)+G(x))$. So that, both generating functional of n-point functions are different (but the difference is not the Jacobian), although they give the same S-matrix elements as I wrote in the first paragraph.
Edit: I will clarify the questions in the comments
Let $I=S(\phi)$ be the action functional in Lagrangian form and let's assume that the Lagrangian generating functional is given by $$Z[j]=\int \mathcal{D}[\phi] \exp{\{iS(\phi)+i\int d^4x\hspace{0.2cm} j\phi \}}$$
Obviously, we may change the integration variable $\phi$ without changing the integral. So that, if $\phi\equiv \chi + G(\chi)$, one obtains:
$$Z[j]=\int \mathcal{D}[\chi]\,\det(1+G'(\chi)) \exp{\{iS(\chi +G(\chi))+i\int d^4x\hspace{0.2cm} j(\chi + G(\chi))\}}$$
If we want to use this generating functional in terms of the field variable $\chi$, the determinant is crucial. If we had started with the action $S'(\chi)=S(\chi +G(\chi))=I$ — without knowing the existence of the field variable $\phi$ —, we would had derived the following Lagrangian version of the generating functional: $$Z'[j]=\int \mathcal{D}[\chi]\,\det(1+G'(\chi)) \exp{\{iS'(\chi )+i\int d^4x\hspace{0.2cm}j \chi\}}$$ Note that $Z'[j]\neq Z[j]$ (but $Z[j=0]=Z'[j=0]$) and therefore the off-shell n-point functions are different. If we want to see if these generating functional give rise the same S-matrix elements, we can, as always, perform a change of integration variable without changing the functional integral. Let's make the inverse change, that is, $\chi\equiv\phi+F(\phi)$: $$Z'[j]=\int \mathcal{D}[\phi]\, \det(1+F'(\phi)) \det(1+G'(\chi)) \exp{\{iS'(\phi+F(\phi) )+i\int d^4x\hspace{0.2cm} j(\phi + F(\phi))\}}=\int \mathcal{D}[\phi]\, \exp{\{iS(\phi)+i\int d^4x\hspace{0.2cm} j(\phi + F(\phi))\}}$$
So that, one has to introduce the n-point functions connected with $Z[j]$ and $Z'[j]$ in the LSZ formula and analyze if they give rise to same S-matrix elements, even though they are different n-point functions.
(Related question: Scalar Field Redefinition and Scattering Amplitude)
• My second formula was from the viewpoint, that I have an action $S(\phi)$ for which I do not know, that it is a somehow transformed other action, e.g. for the free case. Then I naively write my generating functional as my second formula, as I only know the action $S(\phi)$. I have transformed nothing there. Why is it wrong then? If I now transform this action I wrote to the free case I get the third formula which coincides with the transformation law you gave, but not with the formula from Itzykson due to the missing determinant. Jul 4 '13 at 12:49
• By the way, I do understand that terms like $j(t)G(x)$ do not contribute in the S-Matrix due to LSZ, as they are contact terms and that I must not compare Green´s functions. Jul 4 '13 at 12:51
• @drake: I believe we are on the same page. I was essentially trying to explain the second paragraph of your answer, which claims that if we were handed the non linear $S$ we would know to right down the det in the measure. I understand this as being the fact that a Lagrangian does not unambiguously define correlators, because contact terms are singular. Only certain prescriptions for these terms will lead to a coherent theory. The det is a manifestation of the fact that our usual prescriptions are not coherent for this lagrangian. Jul 5 '13 at 4:24
• @gaugi: I think the point is that writing simply $W[J] = \int\exp(i\int\mathcal{L} +J\chi)$ gives pathological results when there are derivatives in the interaction. You need to start from the Hamiltonian prescription. So if someone hands you a Lagrangian $\mathcal{L}$ you should get the Hamiltonian, write down the path integral and then integrate out the momenta, which will get you the determinant. Jul 5 '13 at 13:54
• @BebopButUnsteady Thank you! If the Hamiltonian density is $T_{ij}(q)p_ip_j+W_i(q)p_i+V(q)$, then the integral over momenta gives $(\det (T(q)))^{-1/2}$. $T_{ij}$ is often (but not always) a constant and thus it does not have any implication. Jul 5 '13 at 21:41
I) Ref. 1 never mentions explicitly by name the following two ingredients in its proof:
1. The pivotal role of the Lehmann-Symanzik-Zimmermann (LSZ) reduction formula $$\left[ \prod_{i=1}^n \int \! d^4 x_i e^{ip_i\cdot x_i} \right] \left[ \prod_{j=1}^m \int \! d^4 y_j e^{-ik_j\cdot y_i} \right] \langle \Omega | T\left\{ \phi(x_1)\ldots \phi(x_n)\phi(y_1)\ldots \phi(y_m )\right\}|\Omega \rangle$$ $$~\sim~\left[ \prod_{i=1}^n \frac{i\langle \Omega |\phi(0)|\vec{\bf p}_i\rangle }{p_i^2-m^2+i\epsilon}\right] \left[ \prod_{j=1}^m \frac{i\langle \vec{\bf k}_j |\phi(0)|\Omega\rangle }{k_j^2-m^2+i\epsilon}\right] \langle \vec{\bf p}_1 \ldots \vec{\bf p}_n|S|\vec{\bf k}_1 \ldots \vec{\bf k}_m\rangle$$ $$\tag{A} +\text{non-singular terms}$$ $$\text{for each} \quad p_i^0~\to~ E_{\vec{\bf p}_i} , \quad k_j^0~\to~ E_{\vec{\bf k}_j}, \quad i~\in~\{1, \ldots,n\}, \quad j~\in~\{1, \ldots,m\}.$$ [Here we have for simplicity assumed that spacetime is $\mathbb{R}^4$; that interactions take place in a compact spacetime region; that asymptotic states are well-defined; that there is just a single type of scalar bosonic field $\phi$ with physical mass $m$.]
2. That eqs. (9-102), (9-103), (9-104a) and (9-104b) on p.447 are just various versions of the Schwinger-Dyson (SD) equations. [The SD equations can be proved either via integration by part, or equivalently, via an infinitesimal changes in integration variables, in the path integral. The latter method is used in Ref. 1.]
On the middle of p. 447, Ref. 1 refers to a field redefinition $\varphi\to \chi$ as canonical if
[...] the relation $\varphi\to \chi$ may be inverted (as a formal power series).
This is certainly not standard terminology. Also it is a somewhat pointless definition, since any reader would have implicitly assumed without being told that field redefinitions are invertible. Note in particular, that Ref. 1 does not imply a Hamiltonian formulation with the word canonical.
II) The Equivalence Theorem states that the $S$-matrix $\langle \vec{\bf p}_1 \ldots \vec{\bf p}_n|S|\vec{\bf k}_1 \ldots \vec{\bf k}_m\rangle$ [calculated via the LSZ reduction formula (A)] is invariant under local field redefinitions/reparametrizations.
In this answer, we will mainly be interested in displaying the main mechanism behind the Equivalence Theorem at the level of correlation functions (as opposed to carefully tracing the steps of Ref. 1 at the level of the partition function).
In the LSZ formula (A), let us consider an infinitesimal, local field redefinition
$$\tag{B} \phi~ \longrightarrow ~\phi^{\prime}~=~ \phi +\delta \phi$$
without explicit space-time dependences; i.e., the transformation
$$\tag{C} \delta \phi(x)~=~ f\left(\phi(x), \partial\phi(x), \ldots, \partial^N\phi(x)\right)$$
at the spacetime point $x$ depends on the fields (and their spacetime derivatives to a finite order $N$), all evaluated at the same spacetime point $x$. [If $N=0$, the transformation (C) is called ultra-local.]
One may now argue that near the single particle poles, this will only lead to a multiplicative rescaling on both sides of the LSZ formula (A) with the same multiplicative constant, i.e., the $S$-matrix is invariant. This multiplicative rescaling is known as wave function renormalization or as field-strength renormalization in Ref. 3.
III) Finally, let us mention that Vilkovisky devised an approach, where $1$-particle-irreducible (1PI) correlation functions are invariant off-shell under field reparametrizations, cf. Ref. 4.
References:
1. C. Itzykson and J-B. Zuber, QFT, (1985) Section 9.2, p. 447-448.
2. A. Zee, QFT in a Nutshell, 2nd ed. (2010), Chapter 1, Appendix B, p. 68-69. (Hat tip: Trimok.)
3. M.E. Peskin and D.V Schroeder, (1995) An Introduction to QFT, Section 7.2.
4. G.A. Vilkovisky, The Unique Effective Action in QFT, Nucl. Phys. B234 (1984) 125.
• This is an old answer but I had a couple of questions: 1. Aren't field redefinitions involving derivatives non-invertible? 2. Is the assumption that field redefinition be local necessary? Jul 19 '17 at 2:25
• @Qmechanic How to proof your statement "One may now argue that near the single particle poles, this will only lead to a multiplicative rescaling on both sides of the LSZ formula (A) with the same multiplicative constant" ? Oct 10 at 9:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9534968137741089, "perplexity": 366.9290937236878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587719.64/warc/CC-MAIN-20211025154225-20211025184225-00460.warc.gz"}
|
https://www.physicsforums.com/threads/hohmann-transfer.266141/
|
Hohmann transfer
1. Oct 22, 2008
garyman
A Hohmann transfer orbit is used to send a spacecraft to neptune. However the positions of the other planets were not taken into consideration. The craft approaches Jupiter at an angle of 75 degrees to Jupiter's orbit. Calculate (a) the orbital velocity of Jupiter, (b) the probe's velocity upon reaching Jupiter.
Not really sure how to start this question. Does anyone else have any ideas?
2. Oct 25, 2008
alphysicist
Hi garyman,
I would say the starting point is to calculate the parameters of the elliptical Hohmann orbit using the beginning and ending orbital radii, and using the equations for the velocity impulses needed. However, it's not clear to me what quantities are to be considered as "given". What quantities are you allowed to look up?
Have something to add?
Similar Discussions: Hohmann transfer
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.93502277135849, "perplexity": 680.9390141939799}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541910.39/warc/CC-MAIN-20161202170901-00248-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://export.arxiv.org/abs/2004.01377?context=stat
|
stat
(what is this?)
# Title: Sequential Learning for Domain Generalization
Abstract: In this paper we propose a sequential learning framework for Domain Generalization (DG), the problem of training a model that is robust to domain shift by design. Various DG approaches have been proposed with different motivating intuitions, but they typically optimize for a single step of domain generalization -- training on one set of domains and generalizing to one other. Our sequential learning is inspired by the idea lifelong learning, where accumulated experience means that learning the $n^{th}$ thing becomes easier than the $1^{st}$ thing. In DG this means encountering a sequence of domains and at each step training to maximise performance on the next domain. The performance at domain $n$ then depends on the previous $n-1$ learning problems. Thus backpropagating through the sequence means optimizing performance not just for the next domain, but all following domains. Training on all such sequences of domains provides dramatically more `practice' for a base DG learner compared to existing approaches, thus improving performance on a true testing domain. This strategy can be instantiated for different base DG algorithms, but we focus on its application to the recently proposed Meta-Learning Domain generalization (MLDG). We show that for MLDG it leads to a simple to implement and fast algorithm that provides consistent performance improvement on a variety of DG benchmarks.
Comments: tech report Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Machine Learning (stat.ML) Cite as: arXiv:2004.01377 [cs.CV] (or arXiv:2004.01377v1 [cs.CV] for this version)
## Submission history
From: Da Li [view email]
[v1] Fri, 3 Apr 2020 05:10:33 GMT (3063kb,D)
Link back to: arXiv, form interface, contact.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.387845903635025, "perplexity": 1727.4452635748437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911792.65/warc/CC-MAIN-20201030212708-20201031002708-00527.warc.gz"}
|
https://ai.stackexchange.com/questions/20542/how-to-convert-something-to-vectors
|
# How to convert something to vectors
I wanted to create an encoder, which is the first part of an autoencoder. I do not want to build the whole autoencoder but rather wanted to test whether my mobile device can support running an encoder and encoding some images on a trained TensorFlow model.
But I am having some problems. I found some code online to describe an autoencoder here. My aim is to use the encoder separately to "compress" images. Here is my code. So know I want to convert the encoded variable to a vector format. Can anyone provide a suitably easy solution to accomplish that?
By compressing Images, I mean I just want their vector representation generated by the encoder which can later be decoded. So basically I want it to print its latent space/the encoded variable. The catch is that it should be in a vector representation. The reason is that right know it's shape is (None, 32) which cannot be used for further processing by TensorFlow. So, any ideas?
• If you look at the code you linked you have an Autoencoder model, which basically take the input encodes it and decodes it. If you want to remove the decoding phase declare a new object of type Decoder and call it with the same input as the Autoencoder. Apr 22 '20 at 11:24
• @razvanc92 I have figured that out (I finally found something on the internet) but now the problem is different, so I am changing the whole question by scratch... Apr 22 '20 at 11:26
• I just wrote a small example if you still need it you can access it here: colab.research.google.com/drive/… Apr 22 '20 at 11:31
• Thanx, but I just found out that I do not need something as complex as an encoder. A simple dense layer should be enough to test the processing capacity of my device. So could you help me with the updated problem? Apr 22 '20 at 11:34
• I've modified my previous link to fit your needs. The ? in the shape comes from batches, you can process multiple images at the same time, but if you know in advance how many you're going to process you can just use the batch_shape instead of the shape parameter when defining a dense layer. Apr 22 '20 at 13:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20607441663742065, "perplexity": 391.70052402644797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057337.81/warc/CC-MAIN-20210922072047-20210922102047-00365.warc.gz"}
|
https://eprints.soton.ac.uk/268485/
|
The University of Southampton
University of Southampton Institutional Repository
# Design and experimental characterization of a tunable vibration-based electromagnetic micro-generator
Zhu, Dibin, Roberts, Stephen, Tudor, John and Beeby, Steve (2010) Design and experimental characterization of a tunable vibration-based electromagnetic micro-generator. Sensors and Actuators A: Physical, 158 (2), 284-293.
Record type: Article
## Abstract
Vibration-based micro-generators, as an alternative source of energy, have become increasingly significant in the last decade. This paper presents a new tunable electromagnetic vibration-based micro-generator. Frequency tuning is realized by applying an axial tensile force to the micro-generator. The dimensions of the generator, especially the dimensions of the coil and the air gap between magnets, have been optimized to maximize the output voltage and power of the micro-generator. The resonant frequency has been successfully tuned from 67.6 to 98 Hz when various axial tensile forces were applied to the structure. The generator produced a power of 61.6–156.6 µW over the tuning range when excited at vibrations of 0.59 ms-2. The tuning mechanism has little effect on the total damping. When the tuning force applied on the generator becomes larger than the generator’s inertial force, the total damping increases resulting in reduced output power. The resonant frequency increases less than indicated from simulation and approaches that of a straight tensioned cable when the force associated with the tension in the beam becomes much greater than the beam stiffness. The test results agree with the theoretical analysis presented.
Text
published.pdf - Other
Published date: March 2010
Organisations: EEE
## Identifiers
Local EPrints ID: 268485
URI: http://eprints.soton.ac.uk/id/eprint/268485
ISSN: 0924-4247
PURE UUID: e9154797-d30c-41c3-8435-9253015bd89f
ORCID for Dibin Zhu: orcid.org/0000-0003-0517-3974
ORCID for John Tudor: orcid.org/0000-0003-1179-9455
ORCID for Steve Beeby: orcid.org/0000-0002-0800-1759
## Catalogue record
Date deposited: 09 Feb 2010 15:09
## Contributors
Author: Dibin Zhu
Author: Stephen Roberts
Author: John Tudor
Author: Steve Beeby
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071568012237549, "perplexity": 5219.247186405059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991904.6/warc/CC-MAIN-20210511060441-20210511090441-00123.warc.gz"}
|
http://whitecraneeducation.com/classrooms/classroom.php?id=7&cid=78&tab=4
|
x2
Definitions and Simplifying
( )
Subtracting Rational Expressions
Fundamentally, the process for subtracting two rational expressions follows the same steps as the process for adding only with subtraction in step four. We've found that students often have challenges with the subtraction process so we've included it as a separate section to give us a chance to look at it in more detail.
The process for adding two rational expressions is almost identical to the process for adding two fractions.
1. Reduce all of the fractions. You don't have to do this but it'll make your life easier later on.
2. Find the least common denominator.
3. Make each rational expression into an equivalent expression with the least common denominator.
4. Make a new fraction by subtracting the numerators and keeping the least common denominator.
5. Reduce the fraction from step four.
Example 1
Simplify $$\frac{x - 1}{x - 3}-\frac{x}{x + 4}$$
Both of those fractions are already reduced so we can go right to finding the least common denominator. In this case, that's going to be $(x - 3)(x + 4)$. To get each fraction with the same denominator we have to multiply the numerator and denominator of the first expression by $x + 4$ and the second one by $x - 3$.
$$\frac{(x + 4)(x - 1)}{(x + 4)(x - 3)}-\frac{x(x - 3)}{(x + 4)(x - 3)}$$ $$\frac{x^2 + 3x - 4}{(x + 4)(x - 3)}-\frac{x^2 - 3x}{(x + 4)(x - 3)}$$ $$\frac{x^2 + 3x - 4 - (x^2 - 3x)}{(x + 4)(x - 3)}$$ $$\frac{x^2 + 3x - 4 - x^2 + 3x}{(x + 4)(x - 3)}$$ $$\frac{6x - 4}{(x + 4)(x - 3)}$$
If you aren't clear on how I got the first numerator on the second line, take a look at our page on subtracting polynomials.
The numerator factors to 2(3x - 2). Since there's neither a 2 nor a $3x - 2$ in the denominator, there's nothing we can cancel here. This makes our final answer:
$$\frac{6x - 4}{x^2 + 4x - 12}$$
Example 2
Simplify $$\frac{x^2 + x - 2}{2x^2 + 5x + 2}-\frac{x^2 - 5x - 6}{2x^2 - 7x - 4}$$
First we need to factor all of the polynomials in both expressions.
$$\frac{(x + 2)(x - 1)}{(x + 2)(2x + 1)}-\frac{(x - 6)(x + 1)}{(2x + 1)(x - 4)}$$
Since the numerator and denominator of the first expression both have a $x + 2$ we can reduce the expression by canceling them both.
$$\frac{x - 1}{2x + 1}-\frac{(x - 6)(x + 1)}{(2x + 1)(x - 4)}$$
Now, looking at the factors in the denominator, the least common denominator is $(2x + 1)(x - 4)$. The second expression already has that as its denominator so all we need to do is multiply the numerator and denominator of the first expression by $x - 4$.
$$\frac{(x - 4)(x - 1)}{(x - 4)(2x + 1)}-\frac{(x - 6)(x + 1)}{(2x + 1)(x - 4)}$$ $$\frac{x^2 - 5x + 4}{(x - 4)(2x + 1)}-\frac{x^2 - 5x - 6}{(2x + 1)(x - 4)}$$ $$\frac{x^2 - 5x + 4 - (x^2 - 5x - 6)}{(x - 4)(2x + 1)}$$ $$\frac{x^2 - 5x + 4 - x^2 + 5x + 6}{(x - 4)(2x + 1)}$$ $$\frac{10}{(x - 4)(2x + 1)}$$
The numerator can't be factored any further and there's nothing that goes into 10 evenly in the denominator so that expression is our final answer.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635194540023804, "perplexity": 267.3182241120769}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824338.6/warc/CC-MAIN-20181213010653-20181213032153-00619.warc.gz"}
|
https://chemistry.stackexchange.com/questions/54423/why-is-mno2-not-a-peroxide
|
# Why is MnO2 not a peroxide?
As I know, $\ce{MnO2}$ is called manganese(IV) oxide.
Why can't we name it manganese peroxide since manganese(II) exists too?
In other words, how do we know that the oxidation number of manganese in $\ce{MnO2}$ is $4$ and not $2$?
Peroxides contain two oxygens connected by a single bond. X-ray or neutron diffraction will show that the oxygens in $\ce{MnO2}$ are too far apart to be bonded, and therefore it is not a peroxide.
Peroxide is well known as a ligand to transition metals (though whether it is better described as peroxide or superoxide is sometimes not always clear, e.g. in some cobalt species). An example which you might come across in the lab is chromium(VI) oxide peroxide, $\ce{CrO5}$.
The question’s logical premise is skewed. $\ce{CO2}$ is carbon(IV) oxide or carbon dioxide. But carbon(II) also exists and so does carbon(II) oxide (carbon monoxide, $\ce{CO}$). Just because there are twice as many oxygens in a compound does not mean that it can be a peroxide with a lower oxidation state of the central atom.
In fact, peroxides are decidedly rare. There is $\ce{H2O2}$, $\ce{Na2O2}$, $\ce{[Cr(O)(O2)2]}$ (the chromium butterfly), $\ce{mCPBA}$, $\ce{tBuOOX}$, $\ce{(PhCOO)2}$, artemisinin and I’m already at a loss. Compare that to the vast number of oxides out there and you get what I’m getting at. This is because the $\ce{O-O}$ bond in peroxides is very energetic, and they readily oxidise or reduce other compounds for oxygen to achieve its more stable oxidation states $\mathrm{-II}$ or $\pm 0$ (the former typically more common and stable than the latter).
Thus, peroxides can only form if electrons are there to reduce atmospheric oxygen, but not enough to reduce it to an oxide (or water or an alcohol group etc.). And especially in the case of metals with many available oxidation states, it is very unlikely that a peroxide should form for a given one — especially if the metal is not fully oxidised (remember that the highest common oxidation state of manganese is $\mathrm{+VII}$). If a peroxide accidentally did form, it will typically be a good oxidising or reducing agent. But $\ce{MnO2}$ is neither, it is the thermodynamic hole of ionic manganese compounds. Glow a manganese-containing salt in the flame of a bunsen burner, and given enough time you will arrive at $\ce{MnO2}$. These conditions of formation show us that it is very, very, very, very unlikely for $\ce{MnO2}$ to be a peroxide. And it also hardly reacts to any other compounds, further showing that it is most likely not one.
The reason is oxygen bonding in the molecules. Actually peroxide means oxygen - oxygen should be in single bond but in MnO2 oxygens are double bonded with mn there is no bond between oxygens. For example in case of H2O2 it has oxygen - oxygen single bond so it is named as hydrogen peroxide but in MnO2, both the oxygens are bonded with Mn there is no bond between oxygens.
• $\ce{MnO2}$ is an ionic lattice with several different polymorphs. Describing the oxygens as being double bonded to the manganese is not really true. – bon Jul 2 '16 at 17:36
Peroxides are compounds in which the oxidation state of Oxygen atoms is -1. Since you know that MnO2 is Mn(IV) oxide, which means Oxygen atoms are in -2 oxidation state. Hence it is not called a peroxide.
• Circular logic... – orthocresol Jul 2 '16 at 16:10
• Circular reasoning works because circular reasoning works and because circular reasoning works! (CC @ortho ;)) – Jan Jul 2 '16 at 18:09
• Don't answer when you don't know clearly about it but please don't apply circular logic just for the sake of answering something. – user5764 Jul 3 '16 at 3:07
• not so circular, as we have methods to determine oxidation of Mn by spectroscopy – MolbOrg Jul 3 '16 at 3:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7037051916122437, "perplexity": 1648.7880449637096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543252.46/warc/CC-MAIN-20191212102302-20191212130302-00347.warc.gz"}
|
https://en.wikibooks.org/wiki/HSC_Extension_1_and_2_Mathematics/2-Unit/Preliminary/Trigonometric_ratios
|
# HSC Extension 1 and 2 Mathematics/2-Unit/Preliminary/Trigonometric ratios
Trigonometric ratios deals with sin, cos and tan, which are the three main trigonometric functions. Here we define them in two equivalent ways, exploring regions where they are positive and negative, and various identities (things which are always true) about them.
## Definitions
Here we introduce two definitions of the trigonometric ratios. We present the right-angle triangle definition first, because it is conceptually easier to understand, and is more useful in the geometrical and physical applications. However, whereas the first definition is only applicable for angles between 0° and 90°, the second definition is more general, being valid for all angles, including those greater than 360° and those less than 0°.
### Right-angled triangle
Diagram of a right-angled triangle, with adjacent, opposite, and hypotenuse labeled.
Consider a right-angled triangle like the one shown here. We choose one of corners (not the right-angle) and name the angle there ${\displaystyle \theta }$. Then, we label the sides, according to whether they are opposite (it doesn't touch the angle), the hypotenuse or the other adjacent side. We then define sin, cos and tan to be functions of ${\displaystyle \theta }$ (pronounced and written 'theta') such that
{\displaystyle {\begin{aligned}\sin(\theta )&={\frac {\mbox{Opposite}}{\mbox{Hypotenuse}}}\\\cos(\theta )&={\frac {\mbox{Adjacent}}{\mbox{Hypotenuse}}}\\\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\mbox{Opposite}}{\mbox{Adjacent}}}\end{aligned}}}
Note that these are functions of ${\displaystyle \theta }$, not just a constant multiplied by ${\displaystyle \theta }$. Also, note that these are used so commonly that we normally omit the parentheses:
${\displaystyle \sin(\theta )=\sin \theta \,}$
and similarly for cos and tan.
#### Limitations of this definition
Since this is a right-angled triangle, and the angle sum of a triangle is 180°, ${\displaystyle \theta }$ may only range from 0° to 90°. To define sin, cos and tan for other ranges, we look to a better definition, as below.
### Unit circle
${\displaystyle x^{2}+y^{2}=1\;}$
The unit circle, radius: 1, center: (0, 0):
The unit circle is a very good way for defining the trigonometric functions. If you make an angle t with the x-axis and the radius, the sine value of that angle is the y-value of the intercept between the radius and the circle, and the cosine value is the x-value of the intercept between the radius of the circle. So for any angle t, the point on the graph where the radius meets the circle has the coordinates (cost, sint)
This is because the radius can form a right-angled triangle with the x-axis with one corner on the origin, the other corner on a point on the graph either above or below the x-axis and right or left of the y-axis, and the right-angled corner somewhere on the x-axis below or above the other corner.
## Trigonometric ratios of: – θ, 90° – θ, 180° ± θ, 360° ± θ.
The relation sin2θ + cos2θ = 1, and those derived from it, should be known, as well as ratios of – θ, 90° – θ, 180° + θ, 360° + θ in terms of the ratios of q. Once familiarity with the trigonometric ratios of angles of any magnitude is attained, some practice in solving simple equations, of the type likely to occur in later applications, should be discussed.
## The exact ratios.
Ratios for 0°, 30°, 45°, 60°, 90° should be known as exact values. The exercises given on this section of work should emphasize the use of the exact ratios.
## Bearings and angles of elevation.
The compass bearing, measured clockwise from the North and given in standard three-figure notation (e.g. 023°) should be treated, as well as common descriptions such as ‘due East’, ‘South–West’, etc. Angles of elevation and depression should both be defined, and their use illustrated.
## Sine and cosine rules for a triangle. Area of a triangle, given two sides and the included angle.
The formulae
${\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}}$
${\displaystyle a^{2}=b^{2}+c^{2}-2bc\cos A\;}$
should be proved for any triangle. The expression for the area, 1/2bc sinA, should also be proved.
In applications of these formulae, systematic ‘solution of triangles’ is not required. (This is the type of exercise where the sizes of (say) two sides and one angle of a triangle are given and the sizes of all other sides and angles must be found). The applications should be a means of fixing the results in the pupil’s mind, and should be restricted to simple twodimensional problems requiring only the above formulae. Attention must be given to interpreting calculator output where obtuse angles are required.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9466018676757812, "perplexity": 689.9622500703814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721387.11/warc/CC-MAIN-20161020183841-00242-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://arxiv.org/list/astro-ph/1109?skip=75&show=50
|
# Astrophysics
## Authors and titles for Sep 2011, skipping first 75
[ total of 1343 entries: 1-50 | 26-75 | 76-125 | 126-175 | 176-225 | 226-275 | ... | 1326-1343 ]
[ showing 50 entries per page: fewer | more | all ]
[76]
Title: Schmidt-Kennicutt relations in SPH simulations of disc galaxies with effective thermal feedback from supernovae
Authors: Pierluigi Monaco (1,2), Giuseppe Murante (3,1), Stefano Borgani (1,2,4), Klaus Dolag (5,6) ((1) Physics Dept, Trieste University. (2) INAF Trieste. (3) INAF Torino. (4) INFN Trieste. (5) University Observatory, Munich. (6) MPA, Munich)
Comments: 13 pages, 8 figures, in press on MNRAS. Revised to match published version, reference added
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[77]
Title: Note on the chemical potential of decoupled matter in the Universe
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Statistical Mechanics (cond-mat.stat-mech); General Relativity and Quantum Cosmology (gr-qc)
[78]
Title: Dimensionless cosmology
Journal-ref: Astrophysics and Space Science 2012
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[79]
Title: A search for line intensity enhancements in the far-UV spectra of active late-type stars arising from opacity
Comments: 10 Pages, 8 Figures, and 2 Tables; Accepted in A&A
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
[80]
Title: The influence of Galactic aberration on precession parameters determined from VLBI observations
Authors: Z. M. Malkin
Journal-ref: Astronomy Reports, 2011, Vol. 55, No. 9, 810-815
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Instrumentation and Methods for Astrophysics (astro-ph.IM)
[81]
Title: Further Results from the Galactic O-Star Spectroscopic Survey: Rapidly Rotating Late ON Giants
Authors: Nolan R. Walborn (Space Telescope Science Institute), Jesus Maiz Apellaniz (IAA-CSIC), Alfredo Sota (IAA-CSIC), Emilio J. Alfaro (IAA-CSIC), Nidia I. Morrell (Las Campanas Observatory), Rodolfo H. Barba (Universidad de La Serena, ICATE-CONICET), Julia I. Arias (Universidad de La Serena), Roberto C. Gamen (Instituto de Astrofisica de La Plata-CONICET and Universidad Nacional de La Plata)
Comments: 18 pages, 2 figures, 2 tables; accepted for publication in the November 2011 issue of The Astronomical Journal
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
[82]
Title: Measurement of separate cosmic-ray electron and positron spectra with the Fermi Large Area Telescope
Comments: 5 figures, 1 table, revtex 4.1, updated to match PRL published version
Journal-ref: PRL 108, 011103 (2012)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[83]
Title: Neutral Hydrogen Tully Fisher Relation: The case for Newtonian Gravity
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph)
[84]
Title: Gravitational Waves and the Maximum Spin Frequency of Neutron Stars
Authors: Alessandro Patruno, Brynmor Haskell, Caroline D'Angelo (API, University of Amsterdam)
Comments: 5 pages, 2 figures, Submitted to ApJ Letters
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[85]
Title: Quasars with Anomalous Hβ Profiles I: Demographics
Comments: 12 pages, accepted to PASJ
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[86]
Title: Chameleon Gravity, Electrostatics, and Kinematics in the Outer Galaxy
Journal-ref: JCAP 1112 (2011) 005
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
[87]
Title: Three-dimensional hydrodynamic simulations of the combustion of a neutron star into a quark star
Comments: 13 pages, 10 figures. Accepted for publication in Phys. Rev. D
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Phenomenology (hep-ph); Nuclear Theory (nucl-th)
[88]
Title: Stellar population models at high spectral resolution
Comments: 30 pages, 36 figures, Monthly Notices of the Royal Astronomical Society in press
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[89]
Title: Substructure in the lens HE 0435-1223
Authors: Ross Fadely (Haverford College), Charles R. Keeton (Rutgers University)
Comments: 18 pages, 12 figures, 4 tables. MNRAS accepted
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[90]
Title: An Enhanced Cosmological Li6 Abundance as a Potential Signature of Residual Dark Matter Annihilations
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph)
[91]
Title: Validation of Phonon Physics in the CDMS Detector Monte Carlo
Comments: 6 Pages, 5 Figures, Proceedings of Low Temperature Detectors 14 Conference
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM)
[92]
Title: Deconstructing the kinetic SZ Power Spectrum
Comments: Version accepted for publication in ApJ
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[93]
Title: The Initial Mass Function and Disk Frequency of the Rho Ophiuchi Cloud: An Extinction-Limited Sample
Comments: 46 pages, 7 figures, 4 tables
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
[94]
Title: A gamma-ray signature of energetic sources of cosmic-ray nuclei
Comments: 10 pages, 3 figures; final draft accepted for publication
Journal-ref: Physics Letters B 707, 255 (2012)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Phenomenology (hep-ph)
[95]
Title: Detecting the Highest Redshift (z > 8) QSOs in a Wide, Near Infrared Slitless Spectroscopic Survey
Comments: 16 pages, 19 figures, accepted for publication in MNRAS
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[96]
Title: The Optical and Near-Infrared Transmission Spectrum of the Super-Earth GJ1214b: Further Evidence for a Metal-Rich Atmosphere
Comments: (v2) ApJ in press, no major changes from v1
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
[97]
Title: Slowly balding black holes
Authors: Maxim Lyutikov (Purdue University), Jonathan C. McKinney (Stanford University)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[98]
Title: Constraining the near-IR background light from Population-III stars using high redshift gamma-ray sources
Authors: Rudy C. Gilmore
Comments: 13 pages, 7 figures, 1 table. Accepted to MNRAS. Updated to reflected accepted version, 1 figure added, minor edits made
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[99]
Title: Broadband spectral modelling of bent jets of Active Galactic Nuclei
Comments: Ph.D thesis (University of Mumbai, INDIA) minor change
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[100]
Title: The host galaxy of the BL Lacertae object 1ES 0647+250 and its imaging redshift
Comments: Astronomy and Astrophysics (Letters), accepted, 5 pages, 3 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[101]
Title: PKS 1814-637: a powerful radio-loud AGN in a disk galaxy
Comments: Accepted for publication in A&A -- 11 pages, 9 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[102]
Title: Very Strong Emission-Line Galaxies in the WISP Survey and Implications for High-Redshift Galaxies
Comments: Accepted for publication in the Astrophysical Journal. 15 pages, 13 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[103]
Title: Atmospheres of Hot Super-Earths
Comments: 14 pages, 4 figures, 1 table, accepted for publication in ApJL
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
[104]
Title: The Lag-Luminosity Relation in the GRB Source-Frame: An Investigation with Swift BAT Bursts
Comments: 11 pages, 6 figures, 6 table; Accepted for publication in MNRAS
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[105]
Title: Maser Emission toward the Infrared Dark Cloud G359.94+0.17 Seen in Silhouette against the Galactic Center
Comments: PASJ 64, no2 (April 25, 2012 issue) in press
Subjects: Astrophysics of Galaxies (astro-ph.GA)
[106]
Title: Effects of Rotation on Pulsar Radio Profiles
Authors: Dinesh Kumar, R. T. Gangadhara (Indian Institute of Astrophysics (IIA), Bangalore)
Comments: 5 pages, 2 figures, In-House Scientific Meeting, April 18, 2011, this http URL, Proceedings: IIA, Academic Report: 2010-2011, In press
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[107]
Title: Primordial magnetic fields generated by the non-adiabatic fluctuations at pre-recombination era
Comments: 16 pages, 2 figures, minor corrections, references added, to be published in JCAP
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
[108]
Title: Results from 730 kg days of the CRESST-II Dark Matter Search
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[109]
Title: Forward modeling of emission in SDO/AIA passbands from dynamic 3D simulations
Comments: 48 pages, 14 figures, accepted to be publish in ApJ
Journal-ref: 2011 ApJ 743 23
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
[110]
Title: Seismic modelling of the $β\,$Cep star HD\,180642 (V1449\,Aql)
Comments: 10 pages, 2 Tables, 6 Figures. Accepted for publication in Astronomy and Astrophysics
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
[111]
Title: Follow the BAT: Monitoring Swift BAT FoV for Prompt Optical Emission from Gamma-ray Bursts
Comments: 4 pages, 3 figures. Contributed to the Proceedings of Gamma Ray Bursts 2010 Conference (Nov 1-4, 2010, Annapolis, MD)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[112]
Title: Super-massive binary black holes and emission lines in active galactic nuclei
Authors: Luka C. Popovic
Comments: The work was presented as an invited talk at special workshop "Spectral lines and super-massive black holes" held on June 10, 2011 as a part of activity within the frame of COST action 0905 "Black holes in a violent universe" and as a part of the 8th Serbian Conference on Spectral Line Shapes in Astrophysics.Sent to New Astronomy Review as a review paper
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[113]
Title: LTE model atmopsheres MARCS, ATLAS and CO5BOLD
Authors: Piercarlo Bonifacio (1), Elisabetta Caffau (2,1), Hans-Guenter Ludwig (2,1), Matthias Steffen (3,1), ((1) GEPI - Obs. Paris, CNRS, Univ. Paris Diderot, (2) Zentrum Fuer Astronomie de Universtaet Heidelberg, Landessternwarte, (3) Leibniz-Institut fuer Astrophysik Potsdam)
Comments: Invited talk at the IAU Symposium 282, From Interacting Binaries to Exoplanets; Essential modelling Tools, Tatransk\'a Lomnica 18-22 July 2011, Ed. M. RIchards
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Instrumentation and Methods for Astrophysics (astro-ph.IM)
[114]
Title: Statistics of Bipolar Representation of CMB maps
Journal-ref: Phys. Rev. D, Vol. 85, 043004, 2012
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[115]
Title: Daily Modulation of the Dark Matter Signal in Crystalline Detectors
Authors: Nassim Bozorgnia
Comments: 8 pages, 9 figures, to appear in the Proceedings of the Meeting of the Division of Particles and Fields of the American Physical Society (DPF 2011), Brown University, Providence, Rhode Island, August 9-13, 2011
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[116]
Title: Numerical simulations of relativistic magnetic reconnection with Galerkin methods
Comments: 4 pages, 2 figures. Proceedings of "Advances in Computational Astrophysics: methods, tools and outcomes" (Cefalu', June 13-17, 2011)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc)
[117]
Title: Phase Diagram for Magnetic Reconnection in Heliophysical, Astrophysical and Laboratory Plasmas
Comments: 27 pages, 5 figures, accepted for publication in Physics of Plasmas
Journal-ref: Phys. Plasmas 18, 111207 (2011)
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Plasma Physics (physics.plasm-ph); Space Physics (physics.space-ph)
[118]
Title: Completing the Massive Star Population: Striking Into the Field
Authors: M. S. Oey, J. B. Lamb (U. Michigan)
Comments: Invited review to appear in Four Decades of Research on Massive Stars, eds. L. Drissen, C. Robert, and N. St-Louis, ASP Conference Series. 8 pages, 2 figures
Subjects: Astrophysics of Galaxies (astro-ph.GA)
[119]
Title: The 2008 outburst of IGR J17473--2721: evidence for a disk corona?
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Solar and Stellar Astrophysics (astro-ph.SR)
[120]
Title: The intriguing HI gas in NGC 5253: an infall of a diffuse, low-metallicity HI cloud?
Comments: 19 pages, 12 figures, accepted for publication in MNRAS
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[121]
Title: Pulsars and Gravitational Waves
Authors: K. J. Lee (PKU), R. X. Xu (PKU), G. J. Qiao (PKU)
Comments: 11 pages, 3 figures; in: Gravitation and Astrophysics (Proceedings of the IX Asia-Pacific International Conference, 29 June - 2 July, 2009, Wuhan), eds. J. Luo, Z. B. Zhou, H. C. Yeh, and J. P. Hsu, World Scientific, p.162-172
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc)
[122]
Title: External Electromagnetic Fields of a Slowly Rotating Magnetized Star with Gravitomagnetic Charge
Comments: 6 pages, 2 figures, accepted for publication in Astrophysics and Space Science
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); General Relativity and Quantum Cosmology (gr-qc)
[123]
Title: A Specific Case of Generalized Einstein-aether Theories
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph)
[124]
Title: Non-Equilibrium Ionization State and Two-Temperature Structure in the Bullet Cluster 1E0657-56
Comments: 11 pages, 9 figures. To appear in PASJ
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[125]
Title: Constraints on magnetic field strength in the remnant SN1006 from its nonthermal images
Comments: 6 pages, 3 figure, to be published in MNRAS
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[ total of 1343 entries: 1-50 | 26-75 | 76-125 | 126-175 | 176-225 | 226-275 | ... | 1326-1343 ]
[ showing 50 entries per page: fewer | more | all ]
Disable MathJax (What is MathJax?)
Links to: arXiv, form interface, find, astro-ph, 2112, contact, help (Access key information)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43318015336990356, "perplexity": 23406.078490606393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363135.71/warc/CC-MAIN-20211205035505-20211205065505-00200.warc.gz"}
|
https://basic2tech.com/physics-and-measurement/
|
PHYSICS AND MEASUREMENT
The word Physics originates from the Greek word
Physis, which means nature. Physics in raw terms
is the study of everything around us. Physics is
one of the oldest subjects (unknowing) invented by
humanity. Possibly the oldest discipline in Physics
could be astronomy.
The goals of Physics or Physicist is to express
everyday happenings in a concise mathematical
formula. These formulas are then used by other
Physicist and engineers to predict results of their
experiments. For example Isaac Newton (1642 –
1727) found the laws behind the motion of bodies,
we now use these laws to design rockets that
travel to moon and other planets.
Another major thing that Physicists do is to revise
the laws from time to time depending on
experimental results. Isaac Newton found laws of
motion in the 17th century, these laws worked at
normal speeds, but when a object’s speed is
comparable to that of speed of light, these laws
fails. Albert Einstein (1879 – 1955), put forward the
theory of relativity which gives the same result of
Newton’s laws of motion at slow speeds and far
accurate results to speeds that go up to the speed
of light.
Definition: “MEASUREMENT”is the determination of
the size or magnitude of something “Or” The
comparison of unknown quantity with some
standard quantity of the same rates is known as
measurement
Measurement
Measurement is integral part of Physics like any
other scientific subject. Measurement is a integral
part of human race, without it there will be no
trade, no statistics. You can see the philosophy of
measurement in little kids who don’t even know
what math is. Kids try to compare their height,
size of candy, size of dolls and amount of toys
they have. All these happen even before they know
math. Math is built into our brains even before we
start to learn it.Math provides a great way to study
about anything, that’s why we see computers
involved in almost anything because they are good
at math.
Scale
Scales are used to measure. One would know a
simple ruler or tape could be used to measure
small distances, your height and possibly much
more in Physics we do have certain scales for
certain quantities which we would see very shortly.
Length, Mass and Time
The current system of units has three standard
units: The meter, kilogram, and second. These
three units form the mks-system or the metric
system .
A meter is a unit of length, currently defined as the
distance light travels within 1/299782458th of a
second.
A kilogram is a unit of mass. While it was
previously defined as a specific volume of water
(e.g. 1 Liter or a 10cm^3 cube), it’s current
definition is based on a prototype platinum-iridium
cylinder.
A second is a unit of time. Originally defined as
the amount of time the earth needs to make
1/86400 of a rotation, it is now defined as
9192631770 oscillations of a Cesium-133 atom.
Dimensional and Unit Analysis
Dimensional analysis to determine if an equation is
dimensionally correct. When you are presented
with an equation, dimensional analysis is
performed by stripping the numerical components
and leaving only the unit types (such as Length,
Mass, or Time). It may also be used to determine
the type of unit used for an unknown variable. For
example, the force of gravity may appear as the
following:
It gets converted to the following:
and as such, the unit of force involves multiplying
length and mass, and dividing by the square of the
time.
Unit analysis is similar to dimensional analysis,
except that it uses units instead of the basic
dimensions. The same principle applies; the
numbers are removed, and the units are verified to
be equal on both sides of the equation.
Density Formula
The formula for density is Density Formula
d = density m = mass v = volume
Density
Density is the amount of mass per volume. The
quantity of mass per unit volume of a
substance.The density, or more precisely, the
volumetric mass density, of a substance is its
mass per unit volume. The symbol most often
used for density is ρ (the lower case Greek letter
rho). Mathematically, density is defined as mass
divided by volume:[1]
\rho = \frac{m}{V},
where ρ is the density, m is the mass, and V is the
volume. In some cases (for instance, in the United
States oil and gas industry), density is loosely
defined as its weight per unit volume,[2] although
this is scientifically inaccurate – this quantity is
more specifically called specific weight.
For a pure substance the density has the same
numerical value as its mass concentration.
Different materials usually have different densities,
and density may be relevant to buoyancy, purity
and packaging. Osmium and iridium are the
densest known elements at standard conditions for
temperature and pressure but certain chemical
compounds may be denser.
To simplify comparisons of density across different
systems of units, it is sometimes replaced by the
dimensionless quantity “relative density” or
“specific gravity”, i.e. the ratio of the density of the
material to that of a standard material, usually
water. Thus a relative density less than one means
that the substance floats in water.
The density of a material varies with temperature
and pressure. This variation is typically small for
solids and liquids but much greater for gases.
Increasing the pressure on an object decreases the
volume of the object and thus increases its
density. Increasing the temperature of a substance
(with a few exceptions) decreases its density by
increasing its volume. In most materials, heating
the bottom of a fluid results in convection of the
heat from the bottom to the top, due to the
decrease in the density of the heated fluid. This
causes it to rise relative to more dense unheated
material.
The reciprocal of the density of a substance is
occasionally called its specific volume, a term
sometimes used in thermodynamics. Density is an
intensive property in that increasing the amount of
a substance does not increase its density; rather it
increases its mass.
Conversion of Units
How many kilometers are in 20 miles? To find out,
you will have to convert the miles into kilometers.
A conversion factor is a ratio between two
compatible units.
You may also see conversion factors between
weight (e.g. pounds) and mass (e.g. kilograms).
These factors rely on equivalence (e.g. 1 kilogram
is “close enough” to 2.2 pounds) based on
external factors. While that cannot apply in all
situations, these factors may be used in some
limited scopes.
Estimates and Order-of-Magnitude calculation
The order of magnitude gives the approximate idea
of the powers of 10 .Any number in the form a*10b
[ here a multiplied by 10.. And 10raised to the
power b]if a >or = (10)^0.5 the a become 1 and b
is not changed but when a>(10)^0.5 then a is
taken as 10 so power of b increases by 1.
Significant Figures
A significant figure is a digit within a number that
is expected to be accurate. In contrast, a doubtful
figure is a digit that might not be correct.
Significant figures are relevant in measured
numbers, general estimates or rounded numbers.
As a general rule, any non-zero digit shown is a
significant figure. Zeros that appear after the
decimal point and are at the end of the number are
also significant. Zeros at the end of the number but
before the decimal point are not included as
significant figures (although exceptions may
occur.)
In general, an operation performed on two
numbers will result in a new number. This new
number should have the same number of
significant digits as the least accurate number. If
an exact number is used, it should have the same
number of digits as the estimated number. If both
numbers are exact, the new number should be
calculated fully (within reason).
When doing calculations, you should only keep at
most 1 doubtful digit; while it is acceptable to
keep them when using a handheld calculator or
reflect the correct number of significant digits.
Other units
The current metric system also includes the
following units:
An ampere (A) is a measure for electric current.
A kelvin (K) is a measure for temperature.
A mole (mol) is the amount of substance (based
on number of atoms rather than mass.)
A candela (cd) is a measure for luminous
intensity.
The Lumen (lm) is a measure unit for total
amount light visible for the human eye emitted
by a source.
The lux (lx) is a measure unit for luminous flux
per unit area.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8800451755523682, "perplexity": 3318.702714018063}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499801.40/warc/CC-MAIN-20230130034805-20230130064805-00465.warc.gz"}
|
http://blog.computationalcomplexity.org/2005/07/majority-is-stablest.html
|
## Wednesday, July 27, 2005
### Majority is Stablest
Consider the following two voting schemes to elect a single candidate.
1. Majority Vote.
2. A Majority of Majorities (think an electoral college system with states of equal size).
Which of these voting systems are more stable, i.e., less likely to be affected by flipping a small number of votes?
In an upcoming FOCS paper, Elchanan Mossel, Ryan O'Donnell and Krzysztof Oleszkiewicz prove the "Majority is Stablest" conjecture that answers the above question and in fact shows that majority is the most stable function among balanced Boolean functions where each input has low influence. To understand this result we'll need to define the terms in the statement of the theorem.
• Balanced: A Boolean function is balanced if it has the same number of inputs mapping to zero as mapping to one.
• The influence of the ith variable is the expectation over a random input of the variance of setting the ith bit of the input randomly. The conjecture requires the influence of each variable to be bounded by a small constant.
• Stability: The noise stability of f is the expectation of f(x)f(y) where x and y are chosen independently.
The majority is stablest conjecture has applications for approximation via the unique games conjecture.
1. I believe stability is defined with a parameter \epsilon
as E[f(x)f(y)] where for each i,
with probability 1-\epsilon y_i=x_i and otherwise y_i is chosen uniformly.
2. Also, it is an asymptotic theorem--the influences have to go to zero for it to say something interesting.
3. I keep seeing results about one-time voting systems, including instant-runoff voting or approval voting, but how about results where parties get to vote in multiple elections? (For example, real runoff elections.)
4. http://www.econ.boun.edu.tr/papers/pdf/wp-98-01.pdf
A Degree and an Efficiency of Manipulation of Known Social Choice Rules (Fuad Aleskerov, Eldeniz Kurbanov)
This paper compares some 24 voting methods.
fyi...
5. This result is related to some work in the study of voting power in political science; see here for some discussion and here for a discussion of various voting models and their interpretation. A key issue turns out to be modeling the correlations of votes for people who are near each other. (As far as I know, the results in pure math and cs assume independent voters.)
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751825094223022, "perplexity": 1148.6586922039144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999677213/warc/CC-MAIN-20140305060757-00070-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://www.gradesaver.com/one-flew-over-the-cuckoos-nest/q-and-a/was-randall-actually-mentally-ill-and-if-so-what-disorder-do-you-think-he-was-afflicted-by-117567
|
# Was Randall actually mentally ill? And if so, what disorder do you think he was afflicted by?
Was Randall actually mentally ill? And if so, what disorder do you think he was afflicted by?
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9450552463531494, "perplexity": 10197.964756038778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323864.76/warc/CC-MAIN-20170629033356-20170629053356-00670.warc.gz"}
|
https://www.physicsforums.com/threads/the-bar-is-in-static-equilibrium-w-15n-na-11n.431786/
|
# Homework Help: The bar is in static equilibrium W = 15N, Na = 11N
1. Sep 24, 2010
### indiangeek
1. The problem statement, all variables and given/known data
The bar is in static equilibrium W = 15N, Na = 11N and Nb which is perpendicular to the bar,Nb = 5.66N.What is the magnitude of tension forceT?
2. Relevant equations
3. The attempt at a solution
W - Na - Nb*sin(theta) = 0;
15 - 11 - 5.66*sin(theta) = 0
theta = 45 deg
Nb*cos(theta) = T
T = 5.66*cos(45)
T = 4 N
But the options given are a)17.79N b)25N c)12N d)21N e)0.9 m^2
solve this problem,,,,,,,,,,,,,
#### Attached Files:
• ###### untitled.bmp
File size:
162.8 KB
Views:
134
2. Sep 24, 2010
### PhanthomJay
Re: statics
3. Sep 25, 2010
### pongo38
Re: statics
This is not a well-defined problem because distances have not been given. You have used the law of equilibrium for equating forces in two directions, but you have not used the principle of sum of moments is zero. Wht happens if you try to check your answer wiith T=4? That you cannot do without distances (or at least the ratio of the distances on the rod)
4. Sep 25, 2010
### PhanthomJay
Re: statics
The bar is at a 45 degree angle. Although the distances are not given, W must be located a certain fraction of the bar length from one end, about half way up ,based on the values given.
5. Sep 27, 2010
### pongo38
Re: statics
I would encourage indiangeek to check statics problems using all the equations of equilibrium available. You might be able to determine the exact location of Nb (in relation to W and Na by using the principle of moments. Having done that, the alleged solution T=4 can be checked.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8098949193954468, "perplexity": 2172.427912573879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158958.72/warc/CC-MAIN-20180923020407-20180923040807-00328.warc.gz"}
|
http://mathhelpforum.com/calculus/212338-equivalence-norms.html
|
## Equivalence of Norms
I'm somewhat stuck with the simple proof of the following:
Let the (Lebesgue-)measure of some $\displaystyle \Omega$ be finite and $\displaystyle 1 \le p \le q \le \infty$.
Then for all $\displaystyle u \in L^q(\Omega)$ it is also true that $\displaystyle u \in L^p(\Omega)$, whereby
$\displaystyle ||u||_p \le \text{meas}(\Omega)^{\frac{1}{p}-\frac{1}{q}}||u||_q$;
for $\displaystyle q=\infty$ set $\displaystyle \frac{1}{q}:=0$.
Proof: If $\displaystyle q=\infty$, then $\displaystyle ||u||^p_p=\int_{\Omega}|u(x)|^p\,dx \le \text{meas}(\Omega) \sup_{\Omega \setminus N}|u|^p$, where N ist a large enough null set.
If $\displaystyle q<\infty$, then the Hölder inequality should help, but I'm somewhat confused by the suitable choice of exponents.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9782485961914062, "perplexity": 366.5674595256455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511761.78/warc/CC-MAIN-20181018084742-20181018110242-00299.warc.gz"}
|
http://mathhelpforum.com/geometry/63767-width-height-solid-print.html
|
# Width and Height of Solid
• December 7th 2008, 10:09 AM
magentarita
Width and Height of Solid
The length of a rectangular solid is 7. The width of the solid is 2 more than the height. The volume of the solid is 105. Find the width and the height of the solid.
• December 7th 2008, 10:33 AM
Moo
Hello,
Quote:
Originally Posted by magentarita
The length of a rectangular solid is 7. The width of the solid is 2 more than the height. The volume of the solid is 105. Find the width and the height of the solid.
Let l, w, h be respectively the length, the width and the height of the solid.
We know that the volume of such a shape is defined as being : V=h*l*w
We know that w=2h ("the width is 2 more than the height)
So $105=7*2h*h=14h^2$
Hence $h^2=\frac{105}{14}=\frac{15}{2}$
this is the height. Multiply by 2 to get the width
• December 9th 2008, 09:51 PM
magentarita
ok....
Quote:
Originally Posted by Moo
Hello,
Let l, w, h be respectively the length, the width and the height of the solid.
We know that the volume of such a shape is defined as being : V=h*l*w
We know that w=2h ("the width is 2 more than the height)
So $105=7*2h*h=14h^2$
Hence $h^2=\frac{105}{14}=\frac{15}{2}$
this is the height. Multiply by 2 to get the width
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512919783592224, "perplexity": 524.5824481673943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164848402/warc/CC-MAIN-20131204134728-00075-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/179424/why-does-the-mass-of-an-object-on-a-frictionless-surface-matter
|
# Why does the mass of an object on a frictionless surface matter?
I'm given a physics problem about a climber hanging over a cliff and attached by rope to a rock on level, frictionless ice. The goal is to find the acceleration of the pair. After working out the net forces, the books says that for the rock,
$T = m_ra$, where $T$ is the tension in the rope, since the sum of the forces is equal to mass times acceleration and the net force on the rock is just the tension force.
and for the climber,
$-T + m_cg = m_ca$
So, combining the two, the book gives:
$a = \frac{m_cg}{m_c + m_r}$
I can make sense of the equations, but I don't understand why they work. If there is no friction, then I would think the rock would just move effortlessly across the ice and not detract from the climber's downward acceleration. At the same time, if the climber were in freefall, that would mean the rope tension is 0, which wouldn't be possible if the climber is attached to the rock.
I feel like I'm missing something obvious here..
edit: the picture in the book is similar to:
• The rock is frictionless but still has inertia. Is that what you are asking? – CuriousOne May 1 '15 at 6:36
• @CuriousOne, Ooh right, inertia. There's the obvious thing I was missing. Thanks! – mowwwalker May 1 '15 at 6:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.744405210018158, "perplexity": 363.4885763193031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795403.76/warc/CC-MAIN-20191022004128-20191022031628-00037.warc.gz"}
|
http://www.ida.liu.se/conferences/mucocos2013/authorinfo.shtml
|
Author information MuCoCoS-2013 6th Int. Workshop on Multi-/Many-core Computing Systems
# 6th International Workshop on Multi-/Many-core Computing Systems (MuCoCoS-2013)
### Author Information
Authors of accepted (including conditionally accepted) papers please follow the following guidelines for the preparation of camera-ready papers for both pre-workshop (USB) and post-workshop (to be submitted to IEEE Xplore) proceedings of MuCoCoS-2013.
#### Author/presenter registration
At least one author of each accepted paper needs to register early for MuCoCoS-2013 and present the paper at the workshop. We reserve the right to exclude a paper from publication in the post-workshop proceedings otherwise.
#### IEEE Copyright form
Please fill, sign, and scan the IEEE copyright form for your paper, and send it to christoph.kessler (at) liu.se with cc to sabri.pllana (at) lnu.se by 7 july 2013.
Without the copyright form your paper cannot be published.
The copyright form is required both for pre- and post-workshop proceedings.
On the copyright form, use the following name for the IEEE publication title (conference):
2013 IEEE 6th International Workshop on Multi-/Many-core Computing Systems (MuCoCoS)
#### Preparation of camera-ready papers
Prepare the final version of your paper, taking the reviewer comments carefully into account, and format it with a maximum of 10 pages, using the IEEEtran.sty template (with the conference option).
• For papers in which all authors are employed by the US government, the copyright notice is:
U.S. Government work not protected by U.S. copyright
• For papers in which all authors are employed by a Crown government (UK, Canada, and Australia), the copyright notice is:
978-1-4799-1010-6/13/$31.00 ©2013 Crown • For all other papers the copyright notice is: 978-1-4799-1010-6/13/$31.00 ©2013 IEEE
LaTeX users please add the following lines just before \begin{document} for the copyright notice to show up (shown below as an example for the third case above):
\IEEEoverridecommandlockouts
\hspace{\columnsep}\makebox[\columnwidth]{ }}
MS-Word users can use: "Insert" -> "Text box", insert the appropriate copyright notice in the textbox and place the box (without border) at the bottom left on the first page.
#### PDF eXpress-PLUS information
We use PDF validation via IEEE PDF-eXpress-PLUS to guarantee IEEE Xplore (R) compatible PDF files.
1. Access the IEEE PDF eXpress Plus site.
2. For each conference paper, click "Create New Title".
3. Enter identifying text for the paper (title is recommended but not required).
4. Click "Submit PDF for Checking" or "Submit Source Files for Conversion".
5. Indicate platform, source file type (if applicable), click Browse and navigate to file, and click "Upload File". You will receive online and email confirmation of successful upload.
6. You will receive an email with your Checked PDF or IEEE PDF eXpress Plus-converted PDF attached. If you submitted a PDF for Checking, the email will show if your file passed or failed.
7. For the pre-workshop (USB) proceedings, send your IEEE PDF eXpress Plus-converted PDF to christoph.kessler (at) liu.se with cc to sabri.pllana (at) lnu.se by 7 July 2013.
8. For the post-workshop (Xplore) proceedings, you may update your file on the PDF eXpress Plus site by 14 september 2013 (i.e., 1 week after the workshop).
Note that IEEE may not include papers in IEEE Xplore whose PDF format is not IEEE Xplore compatible.
Hint for Microsoft Word users: The standard settings for the built-in PDF creation in Word 2007/2010 are not Xplore-compatible (in particular, not all fonts are embedded). For further information see e.g. IEEE's PDF specification v4.12 (May 2013), Sections 2 and 6.
Page responsible: Christoph Kessler
Last updated: 2013-10-30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5555327534675598, "perplexity": 10184.663491137759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062782.10/warc/CC-MAIN-20150827025422-00062-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=50&t=57449&p=215846
|
## ICE BOX
205150314
Posts: 106
Joined: Wed Feb 20, 2019 12:16 am
### ICE BOX
when do we know that X is too small to be making a difference for the denominator in the ICE box?
Hannah Pham
Posts: 104
Joined: Fri Aug 09, 2019 12:17 am
### Re: ICE BOX
You would X is too small when K < 10^3
Charysa Santos 4G
Posts: 107
Joined: Wed Sep 18, 2019 12:21 am
Been upvoted: 1 time
### Re: ICE BOX
If your K is less than 10^-3, you can approximate the denominator.
Anna Heckler 2C
Posts: 102
Joined: Wed Sep 18, 2019 12:18 am
### Re: ICE BOX
If X is smaller than 10^-3, then the affect of subtracting this value from the denominator is negligible.
105311039
Posts: 107
Joined: Fri Aug 02, 2019 12:16 am
### Re: ICE BOX
when your K value is less than 10^-3 you can assume that it is too small to affect your concentration.
Gabriel Ordonez 2K
Posts: 113
Joined: Sat Jul 20, 2019 12:15 am
### Re: ICE BOX
If the Kc, Ka, or Kb is smaller than 10^-3, then you can assume to remove the -x because it is small relative to the other number.
Jesse Anderson-Ramirez 3I
Posts: 54
Joined: Thu Sep 26, 2019 12:18 am
### Re: ICE BOX
If any of the K values are smaller than 10^-3 then they are considered negligible.
Ashley Nguyen 2L
Posts: 103
Joined: Sat Aug 17, 2019 12:18 am
Been upvoted: 1 time
### Re: ICE BOX
If the x is < 10^-3, then you can assume that the x has no effect on the denominator.
Angus Wu_4G
Posts: 102
Joined: Fri Aug 02, 2019 12:15 am
### Re: ICE BOX
You can use the approximation if the K value is less than 1X10^-3. After you obtain your final H30+ concentration, you should also make sure the final H30+ concentration is less than 5% of the initial concentration of acid. If it is less than 5%, your approximation is probably fine. If your final concentration is greater than 5% of the initial acid, then your answer could be potentially off, depending on how many significant figures the problem has.
vanessas0123
Posts: 100
Joined: Wed Sep 11, 2019 12:17 am
### Re: ICE BOX
x is too small if it's < 10^-3.
Kishan Shah 2G
Posts: 132
Joined: Thu Jul 11, 2019 12:15 am
### Re: ICE BOX
yes, the cutoff is 10^-3.
However, I would advise only to use the short cut when the value is 10^-5 since you don't want to take a chance and have your answer be off by a few digits. On the test I would always check your answer using the quadratic formula if you used the short cut.
005384106
Posts: 101
Joined: Sat Aug 24, 2019 12:16 am
### Re: ICE BOX
If K is 10^-3 then do you assume that x can be represented as 0.
Jialun Chen 4F
Posts: 108
Joined: Sat Sep 07, 2019 12:16 am
### Re: ICE BOX
The cutoff is 10^-3. Yet in cases of weak acids and bases, one may want to check the % ionization to ensure the approximation is valid (i.e, the quadratic formula is still necessary if the final percentage is >5%).
Juana Abana 1G
Posts: 100
Joined: Wed Sep 18, 2019 12:15 am
### Re: ICE BOX
When the K value is less than 10^-3 then you can assume that the K value it is too small to affect the concentration.
Areli C 1L
Posts: 95
Joined: Wed Nov 14, 2018 12:19 am
### Re: ICE BOX
I would agree that 10^-3 would be the cut-off, but that margin is so close. 10^-4 or 10^-5 would be a safer option. If you do decide to neglect x at 10^-3 always check at the end if your answer is withing 5%.
Matthew ILG 1L
Posts: 112
Joined: Sat Aug 17, 2019 12:15 am
### Re: ICE BOX
I believe that Dr. Lavelle said we can ignore x when K<10^-3.
zfinn
Posts: 106
Joined: Fri Aug 30, 2019 12:16 am
### Re: ICE BOX
when x is <5% of the initial concentration you can omit it
Posts: 104
Joined: Fri Aug 09, 2019 12:16 am
### Re: ICE BOX
if the k is smaller than 10^3, then the x in the ice box can be disregarded in the denominator.
J Medina 2I
Posts: 102
Joined: Wed Sep 25, 2019 12:17 am
### Re: ICE BOX
If you exclude the x and want to double check afterwards to see if the approximation is accurate, then you can calculate the protonation percentage, or deprotonation percentage depending on the question, and if it is below 5% then the approximation is correct.
Owen-Koetters-4I
Posts: 50
Joined: Fri Sep 28, 2018 12:16 am
### Re: ICE BOX
x is small when its less than 10^-3 of initial concentration of acid or base
KHowe_1D
Posts: 103
Joined: Thu Jul 25, 2019 12:15 am
Been upvoted: 1 time
### Re: ICE BOX
X is too small when K<10^-3
AronCainBayot2K
Posts: 101
Joined: Fri Aug 30, 2019 12:17 am
### Re: ICE BOX
If the number (x) is less than 10^-3
Catherine Daye 1L
Posts: 104
Joined: Wed Sep 11, 2019 12:16 am
### Re: ICE BOX
When it’s smaller than 10^-3
Sean Tran 2K
Posts: 65
Joined: Sat Aug 17, 2019 12:17 am
### Re: ICE BOX
If X is less than 10^-3
Maria Poblete 2C
Posts: 102
Joined: Wed Sep 18, 2019 12:15 am
### Re: ICE BOX
If x is less than 10-3, it can be considered negligible.
Nathan Rothschild_2D
Posts: 131
Joined: Fri Aug 02, 2019 12:15 am
### Re: ICE BOX
You also should check again at the end with the 5% dissociation rule. Divide your x by the initial and if it is greater than 5%, then you can't assume it's negligible.
Jacob Villar 2C
Posts: 105
Joined: Sat Aug 17, 2019 12:18 am
### Re: ICE BOX
When K is less than 10^-3, and you'll notice it is able to be approximated if x/initial concentration is less than 5%
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9516459703445435, "perplexity": 5557.898444621657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381803.98/warc/CC-MAIN-20210308021603-20210308051603-00260.warc.gz"}
|
https://infoscience.epfl.ch/record/149757
|
Infoscience
Journal article
# An experimental investigation of laminar-turbulent transition in complex fluids
An experimental study of laminar-turbulent transition in complex diluted fruit purees flows in circular ducts, is presented in this work. Data measured using a rectilinear pipe viscosimeter are analyzed to single out useful critical values of both the wall shear stress and the flow rate at which transition-to-turbulence occurs for a given dilution degree. A comparison between the Dodge-Metzner-Reed methods and the classical Mishra & Tripathi and Hanks correlations to estimate the critical generalized Reynolds number, is also discussed. The emerging discrepancies can reasonably be attributed to viscoelastic effects, which probably become important near the point where transition-to-turbulence occurs. This analysis therefore has important practical implications for the prevention of the breakdown of the structure due to turbulent mechanical stresses [J. Food Engng. 52 (2002) 397].
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9490177631378174, "perplexity": 2023.1027097034241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00404-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://pure.uai.cl/en/publications/measurement-of-quarkonium-production-in-protonlead-and-protonprot
|
# Measurement of quarkonium production in proton–lead and proton–proton collisions at 5.02TeV with the ATLAS detector
ATLAS Collaboration
Research output: Contribution to journalArticlepeer-review
77 Scopus citations
## Abstract
The modification of the production of J/ ψ, ψ(2 S) , and Υ(nS) (n= 1 , 2 , 3) in p+Pb collisions with respect to their production in pp collisions has been studied. The p+Pb and pp datasets used in this paper correspond to integrated luminosities of 28nb-1 and 25pb-1 respectively, collected in 2013 and 2015 by the ATLAS detector at the LHC, both at a centre-of-mass energy per nucleon pair of 5.02 TeV. The quarkonium states are reconstructed in the dimuon decay channel. The yields of J/ ψ and ψ(2 S) are separated into prompt and non-prompt sources. The measured quarkonium differential cross sections are presented as a function of rapidity and transverse momentum, as is the nuclear modification factor, Rp Pb for J/ ψ and Υ(nS). No significant modification of the J/ ψ production is observed while Υ(nS) production is found to be suppressed at low transverse momentum in p+Pb collisions relative to pp collisions. The production of excited charmonium and bottomonium states is found to be suppressed relative to that of the ground states in central p+Pb collisions.
Original language English 171 European Physical Journal C 78 3 https://doi.org/10.1140/epjc/s10052-018-5624-4 Published - 1 Mar 2018
## Fingerprint
Dive into the research topics of 'Measurement of quarkonium production in proton–lead and proton–proton collisions at 5.02TeV with the ATLAS detector'. Together they form a unique fingerprint.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980831146240234, "perplexity": 4616.956327183187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00317.warc.gz"}
|
https://www.scielo.br/j/rsp/a/ZRHjZ9YnFJS3sSnpKwjmndM/?lang=en
|
# ABSTRACT
## OBJECTIVE
To evaluate the sampling plan of the Health Survey of the City of São Paulo (ISA-Capital 2015) regarding the accuracy of estimates and the conformation of domains of study by the Health Coordinations of the city of São Paulo, Brazil.
## METHODS
We have described the population, domains of study, and sampling procedures, including stratification, calculation of sample size, and random selection of sample units, of the Health Survey of the City of São Paulo, 2015. The estimates of proportions were analyzed in relation to precision using the coefficient of variation and the design effect. We considered suitable the coefficients below 30% at the regional level and 20% at the city level and the estimates of the design effect below 1.5. We considered suitable the strategy of establishing the Health Coordinations as domains after verifying that, within the coordinations, the estimates of proportions for the age and sex groups had the minimum acceptable precision. The estimated parameters were related to the subjects of use of services, morbidity, and self-assessment of health.
## RESULTS
A total of 150 census tracts were randomly selected, 30 in each Health Coordination, 5,469 households were randomly selected and visited, and 4,043 interviews were conducted. Of the 115 estimates made for the domains of study, 97.4% presented coefficients of variation below 30%, and 82.6% were below 20%. Of the 24 estimates made for the total of the city, 23 presented coefficient of variation below 20%. More than two-thirds of the estimates of the design effect were below 1.5, which was estimated in the sample size calculation, and the design effect was below 2.0 for 88%.
## CONCLUSIONS
The ISA-Capital 2015 sample generated estimates at the predicted levels of precision at both the city and regional levels. The decision to establish the regional health coordinations of the city of São Paulo as domains of study was adequate.
DESCRIPTORS
Health Surveys, methods; Stratified Sampling; Cluster Sampling; Sample Size; Data Collection; Statistical Analysis
# RESUMO
## OBJETIVO
Avaliar o plano de amostragem do Inquérito de Saúde do Município de São Paulo (ISA-Capital 2015) em relação à precisão das estimativas e à conformação de domínios de estudo pelas coordenadorias de saúde do município de São Paulo.
## MÉTODOS
Foram sorteados 150 setores censitários, 30 em cada Coordenadoria de Saúde, sorteados e visitados 5.469 domicílios ocupados, e realizadas 4.043 entrevistas. Das 115 estimativas feitas para os domínios de estudo, 97,4% apresentaram coeficientes de variação menores do que 30% e 82,6% menores do que 20%. Das 24 estimativas feitas para o total do município, 23 apresentaram coeficiente de variação menor do que 20%. Mais de dois terços das estimativas do efeito do delineamento foram inferiores a 1,5, valor previsto no cálculo do tamanho da amostra, e o efeito do delineamento foi menor do que dois para 88%.
## CONCLUSÕES
A amostra do ISA-Capital 2015 gerou estimativas situadas nos patamares previstos de precisão, tanto as de nível municipal como regional. Foi acertada a decisão de estabelecer as coordenadorias regionais de saúde do município de São Paulo como domínios de estudo.
DESCRITORES
# INTRODUCTION
It is important to know the sampling plans used in epidemiological surveys and the evaluation of the alternatives applied to improve the practice of household surveys. There are few publications on this subject in the Brazilian literature to support new experiences11. Carandina L, Sanches O, Carvalheiro JR. Análise das condições de saúde e de vida da população urbana de Botucatu, SP: I - Descrição do plano amostral e avaliação da amostra. Rev Saude Publica. 1986;20(6):465-74. https://doi.org/10.1590/S0034-89101986000600008
https://doi.org/10.1590/S0034-8910198600...
2. Alves MCGP, Gurgel SM, Almeida MCRR. Plano amostral para cálculo de densidade larvária de Aedes aegypti e Aedes albopictus no Estado de São Paulo, Brasil. Rev Saude Publica. 1991;25(4):251-6. https://doi.org/10.1590/S0034-89101991000400003
https://doi.org/10.1590/S0034-8910199100...
3. Alves MCGP, Silva NN. Simplificação do método de estimação da densidade larvária de Aedes aegypti no Estado de São Paulo. Rev Saude Publica. 2001;35(5):467-73. https://doi.org/10.1590/S0034-89102001000500010
https://doi.org/10.1590/S0034-8910200100...
4. Barata RB, Moraes JC, Antonio PRA, Dominguez M. Inquérito de cobertura vacinal: avaliação empírica da técnica de amostragem por conglomerados proposta pela Organização Mundial da Saúde. Rev Panam Salud Publica. 2005;17(3):184-90. https://doi.org/10.1590/S1020-49892005000300006
https://doi.org/10.1590/S1020-4989200500...
5. Bussab WO; Grupo de Estudos em População, Sexualidade e Aids. Plano amostral da Pesquisa Nacional sobre Comportamento Sexual e Percepções sobre HIV/Aids, 2005. Rev Saude Publica. 2008;42 Supl 1:12-20. https://doi.org/10.1590/S0034-89102008000800004
https://doi.org/10.1590/S0034-8910200800...
-66. Silva NN, Roncalli AG. Plano amostral, ponderação e efeitos do delineamento da Pesquisa Nacional de Saúde Bucal. Rev Saude Publica. 2013;47 Supl 3:3-11. https://doi.org/10.1590/S0034-8910.2013047004362
https://doi.org/10.1590/S0034-8910.20130...
. It is particularly interesting the provision of subsidies to improve sampling designs in time trend studies, which are based on data from successive surveys. More such studies have been carried out in recent years77. Rizzo L, Moser RP, Waldron W, Wang Z, Davis WW. Analytic methods to examine hanges across years using HINTS 2003 & 2005 data. Bethesda: National Institute of Health, National Cancer Institute; 2009 [cited 2017 Apr 4]. (NIH Publication Nº 08-6435. Available from: https://hints.cancer.gov/docs/HINTS_Data_Users_Handbook-2008.pdf
https://hints.cancer.gov/docs/HINTS_Data...
8. Bacigalupe A, Esnaola S, Martin U. The impact of the Great Recession on mental health and its inequalities: the case of a Southern European region, 1997-2013. Int J Equity Health. 2016;15:17. https://doi.org/10.1186/s12939-015-0283-7
https://doi.org/10.1186/s12939-015-0283-...
9. Beltrán-Sánchez H, Andrade FCFD. Time trends in adult chronic disease inequalities by education in Brazil: 1998-2013. Int J Equity Health. 2016;15(1):139. https://doi.org/10.1186/s12939-016-0426-5
https://doi.org/10.1186/s12939-016-0426-...
-1010. Monteiro CN, Beenackers MA, Goldbaum M, Barros MBA, Gianini RJ, Cesar CL, et al. Socioeconomic inequalities in dental health services in São Paulo, Brazil, 2003-2008. BMC Health Serv Res. 2016;16(1):683. https://doi.org/10.1186/s12913-016-1928-y
https://doi.org/10.1186/s12913-016-1928-...
.
In cities in the State of São Paulo, Brazil, health surveys called ISA have been carried out since 2001. The objective is to evaluate the health status of the population living in the city, according to their living conditions, addressing aspects related to lifestyle, acute and chronic morbidities, preventive practices, and use of health services1111. Cesar CLG, Barros MBA, Alves MCGP, Carandina L, Goldbaum M. Saúde e Condição de Vida em São Paulo - Inquérito Multicêntrico de Saúde no Estado de São Paulo - ISA-SP. São Paulo: USP/FSP; 2005. Resenha de Almeida MF. Cienc Saude Coletiva. 2006;11(4):1131. https://doi.org/10.1590/S1413-81232006000400033
https://doi.org/10.1590/S1413-8123200600...
. They are conducted by a team of researchers from public universities of São Paulo and the State Department of Health of São Paulo. Editions were carried out in 2003a a Cesar CLG, Carandina L, Alves MCGP, Barros MBAB, Goldbaum M. Inquéritos de Saúde no Município de São Paulo - ISA-Capital 2003. São Paulo: USP/FSP; 2003 [cited 2017 Apr 4]. Available from: http://www.fsp.usp.br/isa-sp/old/index_arquivos/Page3157.htm , 2008b b Cesar CLG, Carandina L, Alves MCGP, Barros MBAB, Goldbaum M. Inquéritos de Saúde no Município de São Paulo - ISA-Capital 2008. São Paulo: USP/FSP; 2008 [cited 2017 Apr 4]. Available from: http://www.fsp.usp.br/isa-sp/old/index_arquivos/Page1494.htm , and 2015c c Seminário Inquérito de Saúde: ISA-Capital 2015; 31 mar 2016; São Paulo, SP. São Paulo: FSP-USP; c2010 [cited 2017 Apr 4]. Available from: http://www.fsp.usp.br/site/eventos/mostrar/5523 , in the city of São Paulo with most support from the City Health Department, and in 2001, 2008d d UNICAMP, Faculdade de Ciências Médicas, Centro Colaborador em Análise da Situação de Saúde. Inquérito de Saúde do Município de Campinas – ISACamp 2008. Campinas: CCAS; 2008 [cited 2017 Apr 4]. Available from: http://www.fcm.unicamp.br/fcm/ccas-centro-colaborador-em-analise-de-situcao-de-saude/isacamp/2008 , and 2014/2015 in the city of Campinase e UNICAMP, Faculdade de Ciências Médicas, Centro Colaborador em Análise da Situação de Saúde. Inquérito de Saúde do Município de Campinas – ISACamp 2014/2015. Campinas: CCAS; 2014 [cited 2017 Apr 4]. Available from: http://www.fcm.unicamp.br/fcm/ccas-centro-colaborador-em-analise-de-situcao-de-saude/isacamp/2014 .
In these surveys, probabilistic sampling is used, always seeking inferences to the study population based on measures of precision. Although they are similar, the sampling plans used in the different years of the ISA-Capital have different aspects. Their adoption was motivated by the desire to improve the process of data collection based on acquired experiences, preserving the possibility of comparison between the different editions.
The planning of the 2015 survey was based on the interest to produce information on smaller areas of the city, which are more homogeneous in relation to the epidemiological profile. Consistent with this objective, the City Health Department, the main funder of the project, intended to reinforce the use of results by regional managers. This confluence of interests culminated in the definition of regional Health Coordinations of the city of São Paulof f Prefeitura de São Paulo. Secretaria Municipal de Saúde: organização. São Paulo; c2017 [cited 2018 Feb 1]. Available from: http://www.prefeitura.sp.gov.br/cidade/secretarias/saude/organizacao/ as domains of study in the ISA-Capital 2015.
The objective of this study was to evaluate the sampling plan of the ISA-Capital 2015 regarding the precision of estimates and the conformation of the domains of study by the Health Coordinations of the city of São Paulo, Brazil.
# METHODS
Below, we describe the sampling plan of ISA-Capital 2015, highlighting the following aspectcs: population and domains of study and sampling procedures, including calculation of sample size, and random selection of sample units. In addition, we present the results of the application of the sampling plan, considering the households visited and the interviews obtained.
The estimates obtained with the ISA-Capital 2015 sample for the parameters of interest were analyzed for precision using the coefficient of variation. Estimates with coefficients below 20% for the city level and below 30% for the regional were considered sufficiently precise. Thus, we would consider as suitable the establishment of the Health Coordinations as the domains of study if the estimates of proportions according to the age and sex domains had minimum acceptable precision within the coordinations, indicated by coefficients of variation below 30%.
We also evaluated the measures of effect of design, widely used as measures of efficiency of complex sampling designs1212. Sarndal CE, Swensson B, Wretman J. Model assisted survey sampling. New York: Springer Verlag; 1992.,1313. Cochran WG. Sampling techniques. 3.ed. New Work: Wiley; 1977.. Those below 1.5 were considered suitable, which was adopted in the planning of the sample. We also verified the frequency of estimates below 2.0, which is frequently adopted in sampling plans33. Alves MCGP, Silva NN. Simplificação do método de estimação da densidade larvária de Aedes aegypti no Estado de São Paulo. Rev Saude Publica. 2001;35(5):467-73. https://doi.org/10.1590/S0034-89102001000500010
https://doi.org/10.1590/S0034-8910200100...
,44. Barata RB, Moraes JC, Antonio PRA, Dominguez M. Inquérito de cobertura vacinal: avaliação empírica da técnica de amostragem por conglomerados proposta pela Organização Mundial da Saúde. Rev Panam Salud Publica. 2005;17(3):184-90. https://doi.org/10.1590/S1020-49892005000300006
https://doi.org/10.1590/S1020-4989200500...
,66. Silva NN, Roncalli AG. Plano amostral, ponderação e efeitos do delineamento da Pesquisa Nacional de Saúde Bucal. Rev Saude Publica. 2013;47 Supl 3:3-11. https://doi.org/10.1590/S0034-8910.2013047004362
https://doi.org/10.1590/S0034-8910.20130...
,1414. Silva EPC, Nakao N, Joarez E. Plano amostral para avaliação da cobertura vacinal. Rev Saude Publica. 1989;23(2):152-61. https://doi.org/10.1590/S0034-89101989000200009
https://doi.org/10.1590/S0034-8910198900...
.
The parameters estimated in this study were the prevalence of persons who reported the following: use of health service in the last 30 days, hospitalization in the last year, visit to the dentist in the last year, hypertension, allergy, health problem in the last 15 days, and excellent or good self-assessment of health. These parameters were related to the following subjects: use of services, morbidity, and self-assessment of health, usually studied in health surveys. The reference to allergy was selected because it was the only morbidity in which the estimates for adolescents were greater than 10% for the most part.
The reference population of the ISA-Capital 2015 consisted of individuals aged 12 years or more living in permanent private households in the urban area of the city of São Paulo (Table 1, block 1)g g Instituto Brasileiro de Geografia e Estatística. Censo Demográfico 2000: agregado por setores censitários dos resultados do universo. 2.ed. Rio de Janeiro: IBGE; 2003 [cited 2017 Apr 4]. Available from: ftp://ftp.ibge.gov.br/Censos/Censo_Demografico_2000/Dados_do_Universo/Agregado_por_Setores_Censitarios . For the delimitation of the population, the survey used the census tracts classified in the 2010 Census as urban situation – urbanized area, non-urbanized area, and isolated urbanized area – and ‘common’ and ‘special subnormal’ types.
Table 1
Reference population, planned sample of persons and households, and person/household ratio according to age and sex groups and Health Coordination. São Paulo, State of São Paulo, Brazil, ISA-Capital 2015.
Stratified sampling was used and clusters were selected in two stages: census tracts and households.
The strata were formed by the five Health Coordinations of the city of São Paulo: North, Central-West, Southeast, South, and East, which were domains of study. For the sample planning, we also considered the age and sex groups as domains: adolescents (12 to 19 years), male adults (men aged 20 to 59 years), female adults (women aged 20 to 59 years), and older adults (60 years or more). We defined 20 domains of study, both geographic and demographic.
For operational reasons, the total sample size would be 4,250 persons. In order for the Health Coordinations to have the same potential for data analysis, 850 persons were assigned to each one. The sample would have the distribution presented in Table 1 (block 2) if the distribution by the age and sex domains were proportional to the population of these domains in each Coordination. However, the participation of the “adolescent” and “older adult” groups was changed in the sample for more precise estimates in these domains. A 50% larger adolescent population was considered, as well as a 100% larger older population, and a new distribution of the sample was carried out. The numbers of interviews was increased to 150 for two domains: adolescents of the Central-West and Southeast Coordinations (Table 1, block 3).
This number could allow the estimation of proportions of 0.50, with a sampling error of 0.10, considering a 95% confidence level and a design effect of 1.5. The calculation was carried out from the algebraic expression that determines the minimum sample size to estimate proportions under complex samples1313. Cochran WG. Sampling techniques. 3.ed. New Work: Wiley; 1977.,1515. Kish L. Survey sampling. New York: John Wiley; 1965.: $n=P×(1−P)(d/z)2×deff$, where n is the sample size, P is the parameter to be estimated, z = 1.96 is the value in the reduced normal curve related to the 95% confidence level of the confidence intervals, d is the sampling error, and deff is the effect of the design.
The expected mean number of persons per household (ratio between persons and households) was calculated in each domain from the 2010 Census data (Table 1, block 4) to determine the number of households that interviews should be conducted. The number of households was obtained by dividing the sample size of each domain by the respective ratio between persons and households (Table 1, block 5).
However, in order to reached the minimum number of interviews in the presence of non-response (vacant or closed households, refusals, or households with a resident unable to respond), the inclusion of a larger number of households in the sample was planned (Table 1, block 6). A non-response rate of 40% and a percentage of vacant households of 10% were considered.
The interviewees were randomly selected using two-stage sampling. In the first stage, 30 census tracts were randomly selected in each Coordination, with probability proportional to size, measured by the number of permanent private households counted in the 2010 Census, sorted by the average per capita income of the households in the tract.
In the second stage, the households were selected using two different random selections. In tracts classified by the IBGEg g Instituto Brasileiro de Geografia e Estatística. Censo Demográfico 2000: agregado por setores censitários dos resultados do universo. 2.ed. Rio de Janeiro: IBGE; 2003 [cited 2017 Apr 4]. Available from: ftp://ftp.ibge.gov.br/Censos/Censo_Demografico_2000/Dados_do_Universo/Agregado_por_Setores_Censitarios as “common”, the households were systematically selected, based on the list of households carried out in the field. In the census tracts classified as “special subnormal” (which corresponds to favela slums in the city of São Paulo), segments of households were created (mean size of six households). These segments were the second stage of selection, in which the random selection of six segments per tract was planned.
The households were randomly selected corresponding to the rarest domain (adolescents in the Central-West region and older adults in the other four Coordinations) in each tract, and this sample was called the main sample. From the main sample, sub-samples were randomly selected with sizes defined for the other age and sex domains (Table 1, block 7). This type of random selection is equivalent to obtaining four concomitant samples, related to the four domains of study.
There was no intra-household random selection. All persons belonging to the domain for which the household was selected were included in the sample. In the data collection equipment of the interviewers, it was indicated the domains to be searched in each household of the sample.
The overall sampling fractions in each Coordination were:
1. in the rarest domain: $f=30×MiM×bMi$
2. in the other domains: $f=30×MiM×bMi×bdomíniob$
where Mi is the number of households in tract i (data from the 2010 Census), M is the total number of households in the Coordination (data from the 2010 Census), b is the number of households in the rarest domain, i.e., the main sample, and bdomain is the number of households required for each of the three less rare domains.
The second-stage sampling fraction was fixed, which increased (or decreased) the number of households randomly selected in relation to what was planned if the census tract had grown (or decreased) since the 2010 Census. With this option, the second-stage sampling fraction can be rewritten as: $b(Mi'/Ml)Mi'$, where Mi' is the number of households in tract i obtained in the listing of households, performed in the field.
In order to compensate for the differences between the probabilities of the random selection of the individuals in the sample, design weights were introduced in the data analysis step, expressed as the inverse of the sampling fractions, F=1/f (Table 1, block 8)1616. Silva NN. Amostragem probabilística: um curso introdutório. 3.ed. São Paulo: EDUSP; 2015.. This weight can be interpreted as the number of persons in the population “represented” for each person randomly selected.
# RESULTS
The fieldwork of ISA-Capital began in the second half of 2014, but 80.0% of the interviews were conducted in 2015, between January and December. A total of 5,942 households were effectively selected and visited. Of these, 8.0% were vacant households, which amounted to 5,469 occupied houses (Table 2). Information could be obtained in 76.4% of the occupied households about the residents and the presence of persons belonging to the age and sex groups of interest. In these households, 73.4% of the eligible residents were interviewed.
Table 2
Occupied households visited, interviews conducted, and mean interviews by census tract, according to age and sex groups and Health Coordination. São Paulo, State of São Paulo, Brazil, ISA-Capital 2015.
The number of households randomly selected was higher than that considered necessary for the interviews (n = 4,831), provided for in the sampling plan. Nevertheless, the number of interviews was smaller than planned. The minimum of 150 interviews was not reached in two domains (adolescents and adult males in the Central-West Coordination). The Coordinations that had a smaller number of interviews were North (9.0% smaller) and Central-West (28.0% smaller). The target number of interviews for the total of the city was reached for adolescents and older adults and it was further from what was proposed for the group of adult males (18.0% smaller).
Of the 115 estimates made for the domains of study, 97.4% presented coefficients of variation below 30.0%, and 82.6% were below 20.0% (Tables 3 and 4). Of the few estimates that did not reach the desired level of precision (three estimates), one was obtained with a small sample of interviews, below 150, and one estimate was below 0.10, which means an event of very small frequency. All prevalence estimates above 0.30 showed low coefficients of variation; the inverse happened for the estimates below 0.10; none reached the desired levels of accuracy. Of the 24 estimates made for the city, almost all (23 estimates) showed a coefficient of variation below 20%.
Table 3
Number of interviews, prevalence estimates, confidence intervals, coefficients of variation, and effects of design, among men and women aged 20 to 59 years. São Paulo, State of São Paulo, Brazil, ISA-Capital 2015.
Table 4
Number of interviews, prevalence estimates, confidence intervals, coefficients of variation, and effects of design in adolescents aged 12 to 19 years and older adults aged 60 years or more. São Paulo, State of São Paulo, Brazil, ISA-Capital 2015.
More than two-thirds (69.0%) of the estimates of the design effect were below 1.5, which was estimated in the sample size calculation, and the design effect was below 2.0 for 88.0%.
The mean number of interviews per tract for the age and sex groups for the set of Coordinations ranged from 5.7 to 8.1.
# DISCUSSION
The ISA-Capital 2015 sample generated estimates at the predicted levels of precision at both the city and regional levels, which indicates that the decision to establish the regional health coordinations of the city of São Paulo as domains of study was adequate.
There is no single criterion adopted universally to establish a limit for the values of coefficient of variation. Several factors must be considered. The knowledge on whether a particular coefficient of variation is too high or too low requires experience on similar data1717. Steel RGD, Torrie JH, Dickey DA. Principles and procedures of statistics: a biometrical approach. 3.ed. New York: McGraw-Hill; 1997. (McGraw-Hill Series in Probability and Statistics).. The Fundação Sistema Estadual de Análise de Dados (SEAD), responsible for several surveys in the State of São Paulo, guides its decision according to the frequency of the survey and the nature of the phenomenon under study. It does not adopt a single policy for the dissemination of the results of the research it carries outh h Dini NP. Pesquisa por amostragem: política de divulgação de estimativas com baixa precisão amostral. Rio de Janeiro: IBGE; s.d. [cited 2017 Apr 4]. Available from: https://www.ibge.gov.br/confest_e_confege/pesquisa_trabalhos/CD/mesas_redondas/294-1.pdf . Thus, different limits for the coefficient of variation were stipulated in the various surveys conductedi i Instituto Brasileiro de Geografia e Estatística. Pesquisa de Condições de Vida - PCV 2006. Rio de Janeiro: IBGE; 2006 [cited 2017 Apr 22]. Available from: http://produtos.seade.gov.br/produtos/pcv/pdfs/aspectos_metodologicos_pcv2006.pdf . When disclosing the results of the Household Expenditure Survey of 2015/2016, the National Statistical Institute of Portugal proposed that estimates with coefficients of variation between 20% and 30% should be carefully used and those with coefficients above 30% should be disregardedj j Instituto Nacional de Estatística (PT). Orçamentos Familiares: inquérito às despesas das famílias – 2015-2016. Lisboa: INE; 2017 [cited 2017 Apr 22]. Available from: https://www.ine.pt/xportal/xmain?xpid=INE&xpgid=ine_publicacoes&PUBLICACOESpub_boui=277098526&PUBLICACOESmodo=2&xlang=pt . These limits, as well as those proposed in other health studies33. Alves MCGP, Silva NN. Simplificação do método de estimação da densidade larvária de Aedes aegypti no Estado de São Paulo. Rev Saude Publica. 2001;35(5):467-73. https://doi.org/10.1590/S0034-89102001000500010
https://doi.org/10.1590/S0034-8910200100...
,66. Silva NN, Roncalli AG. Plano amostral, ponderação e efeitos do delineamento da Pesquisa Nacional de Saúde Bucal. Rev Saude Publica. 2013;47 Supl 3:3-11. https://doi.org/10.1590/S0034-8910.2013047004362
https://doi.org/10.1590/S0034-8910.20130...
, coincide with those adopted in our study.
The number of households effectively selected was higher than planned. The use of constant fractions in the random selection in the second sampling step may be responsible for this result. With this strategy, the 38% increase in the number of households between the Census and the survey data collection was reflected in the number of households sampled. The equiprobability of the sample was kept by the random selection with probability proportional to size, sacrificing control over its final size.
In addition, sampling fractions were changed in the tracts not yet visited when the follow-up of the field work detected that the non-response rates were greater than expected. This further increased the number of households randomly selected. These increases were offset by the use of weights in the data analysis.
The follow-up of the field work by the team responsible for the survey was carried out through spreadsheets, whose models were improved throughout the various editions of the ISA project. The detailing of the response rates at the household and resident levels by census tract together with the interviews allowed problems to be detected as soon as they occurred. This helped the introduction of adjustments in the sampling plan.
The number of interviews was lower than planned, which shows that population participation in the survey was lower than expected. All households were visited at least three times, at different times and days, which did not prevent high non-response rates.
Although the sample size of the ISA-Capital 2015 is similar to previous editions, the field work was extended for a longer period, mainly due to the greater number of census tracts selected (150 in 2015, 80 in 2008, and 60 in 2003). This was a necessity created by the option of adopting the Health Coordinations as domains of study, setting the number of tracts to 30 in each one. It can be understood as the cost of obtaining regional estimates in this edition of the ISA.
The increase in the number of census tracts meant a smaller number of interviews by tract: 5.7 to 8.1, on average, by age and sex domain. These numbers are far from the optimal number of interviews in each primary sampling unit. This number seeks the balance between precision and cost, considering the ratio between the costs of including a new conglomerate and a new household in the sample, in addition to the degree of intra-cluster homogeneity1818. Yansaneh IS. Overview of sample design issues for household surveys in developing and transition countries. In: UN Department of Economic and Social Affairs, Statistics Division. Household sample surveys in developing and transition countries. New York: United Nations; 2005. Chapter 2. (Studies in Methods; Series F, 96).,k k Where C1 is the cost of an additional tract and C2 is an additional interview, and ρ is the degree of intra-cluster homogeneity. . For a 20-fold cost of including a new conglomerate compared to including a new interview1919. Aliaga A, Ruilin R. Cluster optimal sample size for demographic and health surveys. In: 7th International Conference on Teaching Statistics – ICOTS 7; 2006 Jul 2-7; Salvador, Bahia, Brazil. The Hague: International Statistical Institute; 2006., and considering a degree of homogeneity of 0.05l j Value based on results observed in a previous survey carried out in the city of São Paulo (ISA-Capital), in which most of the health variables studied presented values below 0.05. , the indicated number of interviews for each tract would be 20. However, the decrease in the concentration of interviews by tract, although increasing the cost, had the advantage of increasing precision. This contributed to the small effect of design. The random selection of cluster as opposed to simple selection often increases the variance of the estimates according to the intraclass correlation, which is a characteristic of the population that cannot be changed by the sampling process. However, the inclusion of fewer elements per cluster in the sample can reduce the impact of intraclass correlation on variance, leading to smaller estimates for the design effect.
The random selection adopted in the ISA-Capital, in which the four samples related to the four age and sex domains are obtained simultaneously, relativizes the importance of the increase in cost. The number of interviews per tract, considering the four domains, was between 20.3 and 32.5, depending on the Coordination.
One of the consequences of using weights in the data analysis step is the increase of the estimates of the design effect, and the increase is proportional to the variation between applied weights2020. Kalton G, Brick JM, Le T. Estimating components of design effects for use in sample design. In: UN Department of Economic and Social Affairs, Statistics Division. Household sample surveys in developing and transition countries. New York: United Nations; 2005. Chapter 6. (Studies in Methods; Series F, 96).. In the first ISA editions, sample size was the same for all age and sex domains. This resulted in very different weights, which impacted the design effect when more than one domain was analyzed together. The 2015 survey sought the closer proportional distribution of the sample by the age and sex domains in each Coordination, avoiding the previously observed discrepancy between weights.
One of the characteristics common to all issues of the ISA-Capital is the non-use of intra-household selection. In terms of efficiency, the strategy of interviewing all residents belonging to the age and sex group of interest is superior to that in which only one of the residents of the household is randomly selected for the interview2121. Alves MCGP, Escuder MML, Claro RM, Silva NN. Sorteio intradomiciliar em inquéritos de saúde. Rev Saude Publica. 2014;48(1):86-93. https://doi.org/10.1590/S0034-8910.2014048004540
https://doi.org/10.1590/S0034-8910.20140...
. To apply it, the option of the ISA is to select randomly a main sample and, from it, obtain subsamples of households, according to the need of each domain defined based on the mean number of persons per household indicated in the Census. With the adequate number of households for each domain, there is no need for an intra-household selection.
Based on data from previous editions of the ISA, Alves et al.2222. Alves MCGP, Morais MLS, Escuder MML, Goldbaum M, Barros MBA, Cesar CLG, et al. Sorteio de domicílios em favelas em inquéritos por amostragem. Rev Saude Publica. 2011;45(6):1099-109. https://doi.org/10.1590/S0034-89102011000600012
https://doi.org/10.1590/S0034-8910201100...
have shown that it is particularly advantageous to use segments as an alternative to full address listing when applied to the random selection of households in favela slums. In the ISA-Capital 2015, in addition to being used in favela slums, this strategy was also applied in the last tracts to fasten the field work. Among the advantages associated with the use of segments, we can highlight the speed in locating and identifying households.
The estimates of prevalence within Health Coordinations, for the most part, were considered suitable for all age and sex groups defined as a domain in the ISA-Capital 2015. This result allows the use of data from the survey by health managers in the city of São Paulo, who will have regional information that is sufficiently precise to assess issues related to reported morbidity and the use of services. However, it is important to be aware of the use of results related to rare events, especially when done with small samples.
It was a good choice to follow this path so that the sample in this edition of ISA-Capital could have the data disaggregated by Health Coordinations. The comparison of the results of different regions of the city can help in the understanding of the determinants of the epidemiological situation of the resident population and aspects related to the use of health services available in the area. The government of the city of São Paulo has prepared reports that analyze the data related to the various subjects addressed in the researchm m Prefeitura de São Paulo, Secretaria Municipal de Saúde. Publicações sobre ISA-Capital - SP. São Paulo; 2014 [cited 2017 Apr 4]. Available from: http://www.prefeitura.sp.gov.br/cidade/secretarias/saude/epidemiologia_e_informacao/isacapitalsp/index.php?p=177260 . This production shows the potential contribution of the survey in the analysis of health problems of the population of the city and the adequacy of the coping strategies adopted. The repetition of surveys in the city meets the interest of studying trends in several measures related to the health of the population living in it.
# Acknowledgments
To the field team and coordinators: Margaret Harrison de Santis Dominguez, Mariângela Pereira Nepomuceno Silva, Fernanda Mello Zanetta, and Cleiton Eduardo Fiório.
# REFERENCES
• 1
Carandina L, Sanches O, Carvalheiro JR. Análise das condições de saúde e de vida da população urbana de Botucatu, SP: I - Descrição do plano amostral e avaliação da amostra. Rev Saude Publica 1986;20(6):465-74. https://doi.org/10.1590/S0034-89101986000600008
» https://doi.org/10.1590/S0034-89101986000600008
• 2
Alves MCGP, Gurgel SM, Almeida MCRR. Plano amostral para cálculo de densidade larvária de Aedes aegypti e Aedes albopictus no Estado de São Paulo, Brasil. Rev Saude Publica 1991;25(4):251-6. https://doi.org/10.1590/S0034-89101991000400003
» https://doi.org/10.1590/S0034-89101991000400003
• 3
Alves MCGP, Silva NN. Simplificação do método de estimação da densidade larvária de Aedes aegypti no Estado de São Paulo. Rev Saude Publica 2001;35(5):467-73. https://doi.org/10.1590/S0034-89102001000500010
» https://doi.org/10.1590/S0034-89102001000500010
• 4
Barata RB, Moraes JC, Antonio PRA, Dominguez M. Inquérito de cobertura vacinal: avaliação empírica da técnica de amostragem por conglomerados proposta pela Organização Mundial da Saúde. Rev Panam Salud Publica 2005;17(3):184-90. https://doi.org/10.1590/S1020-49892005000300006
» https://doi.org/10.1590/S1020-49892005000300006
• 5
Bussab WO; Grupo de Estudos em População, Sexualidade e Aids. Plano amostral da Pesquisa Nacional sobre Comportamento Sexual e Percepções sobre HIV/Aids, 2005. Rev Saude Publica 2008;42 Supl 1:12-20. https://doi.org/10.1590/S0034-89102008000800004
» https://doi.org/10.1590/S0034-89102008000800004
• 6
Silva NN, Roncalli AG. Plano amostral, ponderação e efeitos do delineamento da Pesquisa Nacional de Saúde Bucal. Rev Saude Publica 2013;47 Supl 3:3-11. https://doi.org/10.1590/S0034-8910.2013047004362
» https://doi.org/10.1590/S0034-8910.2013047004362
• 7
Rizzo L, Moser RP, Waldron W, Wang Z, Davis WW. Analytic methods to examine hanges across years using HINTS 2003 & 2005 data. Bethesda: National Institute of Health, National Cancer Institute; 2009 [cited 2017 Apr 4]. (NIH Publication Nº 08-6435. Available from: https://hints.cancer.gov/docs/HINTS_Data_Users_Handbook-2008.pdf
» https://hints.cancer.gov/docs/HINTS_Data_Users_Handbook-2008.pdf
• 8
Bacigalupe A, Esnaola S, Martin U. The impact of the Great Recession on mental health and its inequalities: the case of a Southern European region, 1997-2013. Int J Equity Health 2016;15:17. https://doi.org/10.1186/s12939-015-0283-7
» https://doi.org/10.1186/s12939-015-0283-7
• 9
Beltrán-Sánchez H, Andrade FCFD. Time trends in adult chronic disease inequalities by education in Brazil: 1998-2013. Int J Equity Health 2016;15(1):139. https://doi.org/10.1186/s12939-016-0426-5
» https://doi.org/10.1186/s12939-016-0426-5
• 10
Monteiro CN, Beenackers MA, Goldbaum M, Barros MBA, Gianini RJ, Cesar CL, et al. Socioeconomic inequalities in dental health services in São Paulo, Brazil, 2003-2008. BMC Health Serv Res 2016;16(1):683. https://doi.org/10.1186/s12913-016-1928-y
» https://doi.org/10.1186/s12913-016-1928-y
• 11
Cesar CLG, Barros MBA, Alves MCGP, Carandina L, Goldbaum M. Saúde e Condição de Vida em São Paulo - Inquérito Multicêntrico de Saúde no Estado de São Paulo - ISA-SP. São Paulo: USP/FSP; 2005. Resenha de Almeida MF. Cienc Saude Coletiva 2006;11(4):1131. https://doi.org/10.1590/S1413-81232006000400033
» https://doi.org/10.1590/S1413-81232006000400033
• 12
Sarndal CE, Swensson B, Wretman J. Model assisted survey sampling. New York: Springer Verlag; 1992.
• 13
Cochran WG. Sampling techniques. 3.ed. New Work: Wiley; 1977.
• 14
Silva EPC, Nakao N, Joarez E. Plano amostral para avaliação da cobertura vacinal. Rev Saude Publica 1989;23(2):152-61. https://doi.org/10.1590/S0034-89101989000200009
» https://doi.org/10.1590/S0034-89101989000200009
• 15
Kish L. Survey sampling. New York: John Wiley; 1965.
• 16
Silva NN. Amostragem probabilística: um curso introdutório. 3.ed. São Paulo: EDUSP; 2015.
• 17
Steel RGD, Torrie JH, Dickey DA. Principles and procedures of statistics: a biometrical approach. 3.ed. New York: McGraw-Hill; 1997. (McGraw-Hill Series in Probability and Statistics).
• 18
Yansaneh IS. Overview of sample design issues for household surveys in developing and transition countries. In: UN Department of Economic and Social Affairs, Statistics Division. Household sample surveys in developing and transition countries. New York: United Nations; 2005. Chapter 2. (Studies in Methods; Series F, 96).
• 19
Aliaga A, Ruilin R. Cluster optimal sample size for demographic and health surveys. In: 7th International Conference on Teaching Statistics – ICOTS 7; 2006 Jul 2-7; Salvador, Bahia, Brazil. The Hague: International Statistical Institute; 2006.
• 20
Kalton G, Brick JM, Le T. Estimating components of design effects for use in sample design. In: UN Department of Economic and Social Affairs, Statistics Division. Household sample surveys in developing and transition countries. New York: United Nations; 2005. Chapter 6. (Studies in Methods; Series F, 96).
• 21
Alves MCGP, Escuder MML, Claro RM, Silva NN. Sorteio intradomiciliar em inquéritos de saúde. Rev Saude Publica 2014;48(1):86-93. https://doi.org/10.1590/S0034-8910.2014048004540
» https://doi.org/10.1590/S0034-8910.2014048004540
• 22
Alves MCGP, Morais MLS, Escuder MML, Goldbaum M, Barros MBA, Cesar CLG, et al. Sorteio de domicílios em favelas em inquéritos por amostragem. Rev Saude Publica 2011;45(6):1099-109. https://doi.org/10.1590/S0034-89102011000600012
» https://doi.org/10.1590/S0034-89102011000600012
# Publication Dates
• Publication in this collection
03 Sept 2018
• Date of issue
2018
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.490548700094223, "perplexity": 9657.760923662428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710898.93/warc/CC-MAIN-20221202050510-20221202080510-00830.warc.gz"}
|
https://www.physicsforums.com/threads/polarized-electron-in-a-rotating-reference-frame.971485/
|
# B Polarized Electron in a Rotating Reference Frame
• Start date
#### metastable
514
45
I tried asking a similar question in cosmology but got no answer there so here goes...
Suppose I am on a windowless spacecraft in the middle of an intergalactic void. I know that the spacecraft is spinning from measuring the centrifugal forces but have no way of observing the outside universe other than what occurs in my spacecraft. At the center of mass in the spacecraft is an electron trap containing an electron with its spin axis polarized along an axis of the spacecraft in such a way that for each rotation of the spacecraft, the electron spin axis completes one rotation. I now release the electron from the spacecraft in such a way that the electron's spin axis continues to rotate at the same rate that it did when it was inside the craft. Now the spacecraft moves away to a great distance and using thrusters reduces its angular speed to zero. I know that the electron is polarized in a rotating reference frame.. The reference frame of the polarized electron is now rotating relative to what?
Related Special and General Relativity News on Phys.org
#### Nugatory
Mentor
12,337
4,821
You're asking about the spin of an electron, and that brings in all the complexities of quantum mechanical half-integral spin. Did you intend that, or would a rotating ball suffice for the question you're trying to ask? I'm assuming a rotating ball in the comments below and in that case...
Now the spacecraft moves away to a great distance and using thrusters reduces its angular speed to zero......The reference frame of the polarized electron is now rotating relative to what?
The motion of the spacecraft is completely irrelevant to the behavior of the electron.
There is a non-inertial reference frame (which you're calling "a rotating reference frame" - this is somewhat sloppy language but everyone does it) in which the ship and the object were both initially at rest; after the ship moves the object is still at rest using this frame while the ship is moving in a giant circle.
There is an inertial reference in which the object and the ship were both initially rotating; after teh ship moves and kills its angular momentum the ship is now at rest using that frame and the object is still rotating.
You are free to describe the entire situation, both before and after the ship moves, using either frame, or any other that you choose. In practice, you will want to choose whichever ones makes it easiest to calculate whatever you want to calculate.
#### Dale
Mentor
28,324
4,668
The reference frame of the polarized electron is now rotating relative to what?
Relative to any inertial frame, including the one where the spacecraft is now at rest.
#### vanhees71
Gold Member
13,313
5,241
You're asking about the spin of an electron, and that brings in all the complexities of quantum mechanical half-integral spin. Did you intend that, or would a rotating ball suffice for the question you're trying to ask? I'm assuming a rotating ball in the comments below and in that case...
The motion of the spacecraft is completely irrelevant to the behavior of the electron.
There is a non-inertial reference frame (which you're calling "a rotating reference frame" - this is somewhat sloppy language but everyone does it) in which the ship and the object were both initially at rest; after the ship moves the object is still at rest using this frame while the ship is moving in a giant circle.
There is an inertial reference in which the object and the ship were both initially rotating; after teh ship moves and kills its angular momentum the ship is now at rest using that frame and the object is still rotating.
You are free to describe the entire situation, both before and after the ship moves, using either frame, or any other that you choose. In practice, you will want to choose whichever ones makes it easiest to calculate whatever you want to calculate.
Ironically, $s=1/2$, in quantum mechanics is the most simple case of spin you can think of. It leads to the notion of two-level systems, with which you can explain a lot in introductory quantum mechanics (in fact nearly all conceptual principles can be explained by this simple example and an only somewhat more complicated case of two, three,... spins 1/2).
Ironically, what's really difficult is to describe spin in the macroscopic relativistic realm. It's all pretty simple, if not beautiful, in non-relativistic physics, where you have as an example the rigid body (spinning top). If it comes to both special and general relativistic macroscopic physics, I think it's still not completely solved how to really describe it in all details. There's of course a plethora of papers on this over some decades, but it's still not really understood.
For some recent work, see e.g.,
#### Nugatory
Mentor
12,337
4,821
Ironically, s=1/2s=1/2s=1/2, in quantum mechanics is the most simple case of spin
I appreciate the irony, but what you're (reasonably) calling "the most simple case of quantum mechanical spin" is something that everyone else is calling "a can of worms that should not be opened in a B-level relativity thread"
what's really difficult is to describe spin in the macroscopic relativistic realm.
That's also true, but this question doesn't appear to involve rotational speeds high enough to introduce the relativistic complications.
#### vanhees71
Gold Member
13,313
5,241
Hm, well. I thought it's about relatistic dynamics because of the reference to cosmology and setting it up in outer space in a space craft ;-). Unfortunately, I think even the nonrelativistic treatment of quantum mechanical systems in non-inertial frames of reference is way beyond B level.
I think Schmutzer is the standard reference for this problem:
https://onlinelibrary.wiley.com/doi/pdf/10.1002/prop.19770250102
#### metastable
514
45
You're asking about the spin of an electron, and that brings in all the complexities of quantum mechanical half-integral spin. Did you intend that, or would a rotating ball suffice for the question you're trying to ask?
So if the elevator spins at 1000rad/sec, and discharges a single electron into the vacuum, will the electron retain any of the elevator’s classical angular speed or will the electron’s spin be the entirely quantum-mechanical concept of spin? If the emitted electron does retain some or all of the elevator’s 1000rad/sec classical spin, what would this additional spin (above and beyond the intrinsic quantum mechanical spin) be relative to?
then I posted my slightly more refined question in the first post in the high energy / particle physics section and it was moved here.
Suppose I am on a windowless spacecraft in the middle of an intergalactic void. I know that the spacecraft is spinning from measuring the centrifugal forces but have no way of observing the outside universe other than what occurs in my spacecraft. At the center of mass in the spacecraft is an electron trap containing an electron with its spin axis polarized along an axis of the spacecraft in such a way that for each rotation of the spacecraft, the electron spin axis completes one rotation. I now release the electron from the spacecraft in such a way that the electron's spin axis continues to rotate at the same rate that it did when it was inside the craft. Now the spacecraft moves away to a great distance and using thrusters reduces its angular speed to zero. I know that the electron is polarized in a rotating reference frame.. The reference frame of the polarized electron is now rotating relative to what?
#### Dale
Mentor
28,324
4,668
I tried asking a similar question in cosmology but got no answer there
$14\ne 0$
It would be better to say that you didn’t understand or didn’t like the answers than that you didn’t get any. You clearly did, and many of the same people post in both places, so they might be irritated at the dismissal of their posts.
#### metastable
514
45
Relative to any inertial frame, including the one where the spacecraft is now at rest.
I appreciate yours and their answers but I don't understand why the lone electron polarized in a "rotating" (or "non-inertial") frame is in fact considered in a non-inertial reference frame. Wouldn't an electron that was polarized in a non-rotating frame take the same path (ie no acceleration) as one that was polarized in a rotating frame (also no acceleration)?
#### Nugatory
Mentor
12,337
4,821
but I don't understand why the lone electron polarized in a "rotating" (or "non-inertial") frame is in fact considered in a non-inertial reference frame.
Everything is always “in” all reference frames, so the object (not an electron! They don’t rotate!) is certainly “in” the non-inertial frame in which the ship was initially at rest.
Note the scare-quotes around the word “in” above. People often speak of something being “in” a reference frame, but that’s sloppy and inaccurate terminology; some of the difficulty here may be that this sloppiness has misled you. Anytime that someone says “in a reference frame”, they’re really saying something more like “using the coordinates assigned by that reference frame” and clearly I can use any reference frame I please to assign coordinates to points on the surface of the rotating object.
There’s a non-inertial frame in which the spatial coordinates of those points are constant (the first one described in my previous post) and inertial one in which those coordinates vary periodically with time (the second one). The rotating object is no more “in” one than the other, and it would be a good exercise for you to try writing down the transformations between those two frames.
#### metastable
514
45
the object (not an electron! They don’t rotate!)
Thank you for your answer but the above quote gets to the heart of what I am trying to understand. You said electrons don't rotate... so does it mean it is not possible to "polarize" one or more electrons in a spinning craft such that changes to the vector of polarization match the rotation rate of the craft? What would that be called if it isn't called "spinning" or "rotating" the electrons? (please excuse my potentially sloppy language-- it's not deliberate).
#### Dale
Mentor
28,324
4,668
I don't understand why the lone electron polarized in a "rotating" (or "non-inertial") frame is in fact considered in a non-inertial reference frame.
I don’t understand your confusion. You are the one who set up the scenario and you are the one who specified that it was at rest in the non-inertial frame. So how can you possibly be confused about that? You are the one who specified it. You even identified that there were fictitious forces. You were very specific.
#### Nugatory
Mentor
12,337
4,821
You said electrons don't rotate... so does it mean it is not possible to "polarize" one or more electrons in a spinning craft such that the vector of polarization matches the rotation rate of the craft? What would that be called if it isn't "spinning" the electrons? (please excuse my potentially sloppy language-- it's not deliberate).
The word “spin” is used with electrons for historical reasons, but it means a quantum-mechanical property of point particles that is altogether unrelated to the classical notion of an object spinning around its axis. Electrons do have a magnetic moment so can be aligned with a magnetic field (I think that’s what you mean by “polarized”) but it has absolutely nothing to do with the magnetic moment of a rotating charged object.
#### metastable
514
45
I don’t understand your confusion. You are the one who set up the scenario and you are the one who specified that it was at rest in the non-inertial frame. So how can you possibly be confused about that? You are the one who specified it.
Because on wikipedia:
A non-inertial reference frame is a frame of reference that is undergoing acceleration with respect to an inertial frame.[1]
^I didn't understand why a lone electron with a changing polarization vector experiences acceleration compared to one with a non changing polarization vector.
#### Nugatory
Mentor
12,337
4,821
Because on wikipedia:
A non-inertial reference frame is a frame of reference that is undergoing acceleration with respect to an inertial frame.[1]
And stuff like that is the reason that Wikipedia is not an acceptable reference under the forum rules. It’s not exactly wrong, but it’s also nowhere near right, and it is unlikely that the anonymous Wikipedian who wrote that understood the subtleties here.
#### Dale
Mentor
28,324
4,668
A non-inertial reference frame is a frame of reference that is undergoing acceleration with respect to an inertial frame.[1]
^I didn't understand why a lone electron with a changing polarization vector experiences acceleration compared to one with a non changing polarization vector.
Hmm, I think that definition is a little confusing. I don’t think I would use it.
Newton’s first law says that a free particle (no interactions with other objects) travels in a straight line at constant speed. This is the principle of inertia. So inertial frames are ones where Newton’s first law holds and non-inertial frames are ones where it does not hold.
The presence of fictitious forces in your frame indicates that it is non inertial because free objects would accelerate due to the fictitious forces.
#### pervect
Staff Emeritus
9,558
831
Thank you for your answer but the above quote gets to the heart of what I am trying to understand. You said electrons don't rotate... so does it mean it is not possible to "polarize" one or more electrons in a spinning craft such that changes to the vector of polarization match the rotation rate of the craft? What would that be called if it isn't called "spinning" or "rotating" the electrons? (please excuse my potentially sloppy language-- it's not deliberate).
The quantum description of spin may be helpful to you. You could perhaps get a better answer in the quantum forum, but I'll give my best shot at it here. What I'm taking away from your posts is that I think you have the idea an electron is literally "spinning". This is not the case. We call the quantum property that the electron has "spin", but it's not actually spinning. What I'll do now is go into a bit what the quantum propety we call "spin" is about, in basic terms.
I could be misunderstanding your point, but this is my interpretation after some thought about what lies behind your question.
Let us start our exploation of "spin" with one of the famous experiments that led us to believe that electrons have spin - the Stern-Gerlach experiment. <<link>>.
In this experiment, neutral silver atoms are shot through a magnetic field. These neutral atoms (not electrons) are deflected either "up" or "down".
This is sometimes described as the beam being split into two "polarized" parts. See for instance the following quote. From another site:
One of the cornerstones of quantum mechanics is the Stern-Gerlach effect. An unpolarized beam of silver atoms is passed through a strong magnetic field gradient and splits into two polarized beams. This effect is one of the main motivations to postulate that electrons have spin, in particular spin-1/2.
I am guessing that this may be what you were thinking of when you wrote the word "polarized", but there is some question in my mind as to what you actually meant by this phrase, it had me scratching my head for a bit. If you meant something else, you might want to clarify.
So, what is going on here? Basically, the silver atoms are acting like little bar magnets. This is called a magnetic dipole moment, or just a magnetic moment.
If the silver atoms acted like classic bar magnets, the beam would not split into two parts. Rather, the beam of atoms would spread out. Some of the little magnets would be oriented one way, others would be oriented in other ways, the orientations would be random. Depending on the orientation of the magnets, they might be attracted in the direction of the magnetic field, repelled and move in the opposite direciton, or be completely unaffected.
What makes the quantum experiment is that the beam does split into two distinct parts, it does not just spread out. One beam is attracted to the magnets, one is repelled. We usually say that the spin is either "up" or "down", (this assumes the magnets are oreinted vertically), and that passing through the Stern-gerlach apparatus "measures" the spin of the silver atom. There is no atom in the beam that is unaffeced passing through the magnetic field - it is deflected one way or the other, there is no "middle ground". This is one of the important differences between the quantum behavior and the classical behavior. And it's not at all inttuitive.
Now - how does this relate to classical spin? If we had a spinning point charge, it would not act like a bar magnet at all. IT would have no magnetic moment. A ball of spinning charge, though, would act like a little bar magnet. So the electron is in some respects a little like a spining ball of charge, in that it has a magnetic moment, but if one tries to ask questions like "how big" the ball of charge is, one does not necessarily get sensible answers. The electron acts like a point particle, not like a little ball of charge, in other experiments.
We can say, unequivocally, that the electron has a magnetic moment, this has been measured experimentally. So it's similar to the spinning ball of charge in that it has a magnetic moment, but it's different than a spinning ball of charge, too.
See for instance the wiki article on the electron magnetic moment <<link>>.
One final point. The Stern Gerlach experiment used silver atoms, and not electrons. This may be a source of puzzlement. Why use silver atoms, why not use electrons? The basic issue is that the electrons are not heavy enough. The wave nature of the electron would make the splitting of the electron beam into two parts not measurable, because of the Heisenberg uncertanity principle. This is from memory - I know I've read this, but alas, I don't recall all the details. And I believe it was rather tehcnical as well, and this is just an overview.
This is a limit of the Stern-Gerlach experiment itself rather than anything really fundamental, more sophisticated experiments can and have detect the magnetic moment of the electron, and it behaves just like the non-classical magnetic moment of the silver atoms in that when we measure it, it's in one of two states, it's either "up" or "down".
Perhaps there is a better way to talk about spin than the Stern-Gerlach experiment, but from what I recall, that's how it's usualy introduced. But I'd certainly encourage you to read more about it spin if you are interested . However, reading popularizations may be of limited use. So I don't have any specific recommendations of what you can read at an introductory level.
The detailed mathematics is quite interesting, there are a pair of complex numbers, the square magnitude of one number gives the probability of finding the electron in the "up"state, the squared magnitude of the other number gives the probability of finding the electron in the "down" state. Electrons can be in what's called a "superposition" of quantum states, as well. It's quite interesting, and very relevant to understanding quantum mechanics, but I think we're drifting away from your quesiton into deeper waters, so I'll stop here.
That's the story in a nutshell.
I've focussed here on the electron, and noted that it's the magnetic moment that's the physical feature associated with 'spin'. I have not discussed the magnetic fields present in a spinning frame of reference containing a classical point charge, but that would be another post - and this one is already rather long.
#### jartsa
1,366
103
I tried asking a similar question in cosmology but got no answer there so here goes...
Suppose I am on a windowless spacecraft in the middle of an intergalactic void. I know that the spacecraft is spinning from measuring the centrifugal forces but have no way of observing the outside universe other than what occurs in my spacecraft. At the center of mass in the spacecraft is an electron trap containing an electron with its spin axis polarized along an axis of the spacecraft in such a way that for each rotation of the spacecraft, the electron spin axis completes one rotation. I now release the electron from the spacecraft in such a way that the electron's spin axis continues to rotate at the same rate that it did when it was inside the craft. Now the spacecraft moves away to a great distance and using thrusters reduces its angular speed to zero. I know that the electron is polarized in a rotating reference frame.. The reference frame of the polarized electron is now rotating relative to what?
Electron does not retain the rotation of its orientation. It's a gyroscope. It retains its orientation really well. The rotation of the orientation is not retained at all, to a very good approximation.
Any other object would be better for this experiment - may I suggest a broom stick.
Last edited:
#### Dale
Mentor
28,324
4,668
I don't think that the OP's question is really about QM, I think it is about reference frames. He just chose an unfortunate example. @metastable can you clarify?
#### metastable
514
45
What I'm taking away from your posts is that I think you have the idea an electron is literally "spinning". This is not the case.
We call the quantum property that the electron has "spin", but it's not actually spinning.
I am guessing that this may be what you were thinking of when you wrote the word "polarized", but there is some question in my mind as to what you actually meant by this phrase, it had me scratching my head for a bit. If you meant something else, you might want to clarify.
So the electron is in some respects a little like a spining ball of charge, in that it has a magnetic moment, but if one tries to ask questions like "how big" the ball of charge is, one does not necessarily get sensible answers. The electron acts like a point particle, not like a little ball of charge, in other experiments.
We can say, unequivocally, that the electron has a magnetic moment, this has been measured experimentally.
Electron does not retain the rotation of its orientation. It's a gyroscope. It retains its orientation really well. The rotation of the orientation is not retained at all, to a very good approximation.
Any other object would be better for this experiment - may I suggest a broom stick.
I don't think that the OP's question is really about QM, I think it is about reference frames. He just chose an unfortunate example. @metastable can you clarify?
Thanks for all the answers. I will try to summarize my remaining confusion on the matter. It pertains to whether the electron can or cannot physically rotate its orientation.
^I read that the electron has a magnetic moment, and the moment is considered a vector pointing from the south to the north pole of the magnet / electron.
Suppose on the spaceship, I've determined by measuring the centrifugal forces that the spaceship is rotating at 1 rad/sec. Suppose the spaceship has an X, Y, and Z axis which are locked relative to the ship (ie the X, Y, and Z of the ships reference frame is also rotating at 1 rad/sec)
Is it possible to do the following:
Suppose the ship is rotating 1rad/sec about its X axis. While the electron is in the trap at the center of mass of the ship, measurements are taken 1.1 seconds apart to determine the orientation of the elecron's magnetic moment relative to the ships X, Y and Z axis. Suppose that within a couple of percentage points of accuracy I determine in both measurements taken 1.1 seconds apart that vector of the electron's magnetic moment is substantially parallel to the Y axis of the ship (ship rotates about X axis). Since the ship rotates 1 rad/sec about the X axis, the vector of the Y axis changes at 1 rad/sec. Since both measurements of the electron's magnetic moment vector taken 1.1seconds apart show that it is parallel within measurement limits to a rotating Y axis, is it reasonable to conclude that the orientation of the vector of the magnetic moment of the electron is rotating at a rate of at least 1 rad/sec?
#### Attachments
• 114.4 KB Views: 26
• 157.3 KB Views: 25
#### Dale
Mentor
28,324
4,668
The gyromagnetic ratio for an isolated electron is 1.76E11 rad/s/T. So an electron with a precession rate of 1 rad/s would imply a magnetic field of 5.7 pT. Form your description of the geometry it would be 5.7 pT in the x direction.
#### metastable
514
45
So an electron with a precession rate of 1 rad/s would imply a magnetic field of 5.7 pT.
Is there a way to release the electron from the craft in such a way that I could expect it to continue precessing unless further acted on?
#### Dale
Mentor
28,324
4,668
Is there a way to release the electron from the craft in such a way that I could expect it to continue precessing unless further acted on?
As long something with that gyromagnetic ratio remains in a magnetic field of 5.7 pT then it will continue precessing regardless of the presence or absence of the ship. However, being a quantum mechanical particle there is a lot of messy quantum mechanical considerations for a single electron including not being able to measure its state without causing the wavefunction to change. You would do much better to consider a classical object with the same gyromagnetic ratio or to consider a large ensemble of electrons such that you can make measurements on the ensemble and obtain results approximately equal to the expectation value (known as the classical limit).
I think that your insistence on using "an electron" is detracting from the substance of your actual question which I believe is about the reference frames and not the quantum mechanics of electrons.
#### Nugatory
Mentor
12,337
4,821
Is there a way to release the electron from the craft in such a way that I could expect it to continue precessing unless further acted on?
The electron is precessing because it is in a powerful magnetic field. There are two possibilities:
1) The equipment generating the field is in the ship, so if the ship and the electron are separated the electron will no longer be subject to that field and will no longer precess. Exactly what does happen depends on the details of how you separate the two.
2) The equipment generating the field is not in the ship. In this case, the presence or absence of the ship is altogether irrelevant.
#### metastable
514
45
The electron is precessing because it is in a powerful magnetic field. There are two possibilities:
1) The equipment generating the field
So does this mean it would require a constant supply of energy to cause the electron to continuously change the vector of its magnetic moment?
• Posted
Replies
3
Views
2K
• Posted
Replies
14
Views
2K
• Posted
Replies
12
Views
2K
• Posted
Replies
10
Views
2K
• Posted
Replies
6
Views
1K
• Posted
Replies
13
Views
751
• Posted
Replies
5
Views
700
• Posted
Replies
15
Views
3K
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6985211968421936, "perplexity": 408.1354611301573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833089.90/warc/CC-MAIN-20191023094558-20191023122058-00273.warc.gz"}
|
https://www.gamedev.net/blogs/blog/883-the-animal-farm-gamedev-blog/?page=1&sortby=entry_last_update&sortdirection=desc
|
• entries
25
39
• views
47064
PC, 360, iPhone, and Android game development talk.
## Rocket Rascal Postmortem
[color=#282828][font=arial]My latest game, Rocket Rascal, was just released into the iOS and Android app stores.[/font][/color]
[color=#282828][font=arial](You can get it here for iOS and here for Android)[/font][/color] [color=#282828][font=arial]It's a little too early to tell whether the game was a "success," though it has definitely had a slow start. Instead, I just wanted to discuss some of the good and bad points that came along with development. [/font][/color] [color=#282828][font=arial]There were definitely a few major headaches and a few places I thought would be headaches that turned out blissfully pleasant. So here we go:[/font][/color] [color=#282828][font=arial]What Went Right[/font][/color]
[color=#282828][font=arial]Tumblr & TIG for Recruiting[/font][/color]
I found the artist I worked with off a random Tumblr post, and I couldn't be more pleased with our working relationship. I put out a public call for localizers on Tumblr and TIG and within a week found all the people I needed and then some. Not a single person disappointed, a real triumph when you're trying to wrangle 11 people on a small budget and you have no prior management experience. I also put a call out to integrate third-party game characters in the game, and a lot of Tumblr/TIG developers were very awesome to let me use their works. This created a lot of opportunities for cross-promotion and fan building.
(above: one of our featured characters, Carrie, by Ink Dragon Works)
Game Dev Meetups
I host a coworking event on meetup.com that allowed me to put my game in front of people regularly and get feedback. It also provided a consistent, reliable schedule for certain aspects of development. Further, there was never going to be an Android release - I don't have an Android device, and the one device I could get access to wouldn't run the game. Some people at a different meetup allowed me to see the game running on 4 different Android devices and convinced me that Android was a feasible target. So now it exists. Preproduction
This is such a boring point, because everyone says that prototyping is important, but Rocket Rascal really went through a large number of iterations before the core mechanics were sussed out. It went from a chaotic survival game more like Luftrausers to a portrait-style platform jumper to the rocket jumping vertical climber you see now. Preproduction took a little longer than I wanted, but I think the game is better for it. Here's an early screenshot: Handing Over Art Direction
This was always "my" project - I spawned the idea, ran the design, implemented everything, and managed people I needed. I knew very early that my eye for art design was less than stellar, and I sought a partner who could drive that. When I did find that person, I handed her a rough outline of the visual style I was targeting and let her run with that. She came up with the construction paper styling and the Beetlejuice-esque sandworm design and even the idea for different rocket launchers (the initial design called for multiple characters but only one rocket launcher). Had I run the art direction, things would've looked much different - and looking back, probably would've taken a lot longer and not been as good. [color=#282828][font=arial]What Went Wrong[/font][/color]
Performance
I didn't stay on top of my game's performance on actual devices, and it came back to haunt me. Near the end of development I found I had significant performance issues, and what turned into a 2-day time estimated turned into over a week plus occasional revisits. Next time I'm going to strive to stay on top of this with periodic performance checks built into my milestones. Budget
This was a small game that I self-financed, and I let my expenses get a bit out-of-control. I didn't take into account just how much it can cost to finance 9 localizations even when you're paying sub-standard rates. Will those pay for themselves? I don't know, but I'm getting skeptical - I know so far the vast majority of my downloads are in English. So why did I do this anyway? My last game was featured in the app store, but it was only really featured in America. I became convinced that if I had localized it during the time of the promotion, it would've picked up in other countries and been more successful. Next time I'm going to start with a much smaller batch of localizations and gradually add in more as necessary. VFX
I really wanted to go crazy with the VFX for this game. I wanted poppy visuals and really attention-grabbing effects. I tried to implement a bunch of stuff, but none of it worked - they didn't fit the game or they interfered with the play or they just weren't that good. Compound this with the fact that I don't have a very visual eye. It ended up being a lot of wasted time, and the things that made it in were not as profound as I wanted. Next time I'm going to strive to find someone else who can drive this and take my queues from them. What Went ... Somewhere in Between
Unity
I'm usually a Unity proponent, and for the most part development went fine. But somewhere near the end things turned pear-shaped. I updated to a patch release and started having real troubles keeping my project from breaking. There were days where Unity would freeze or crash multiple times an hour, and the Services tab gave me no end of grief. I guess it's my own fault for switching Unity versions during the tail end of development, but I really needed a few of those bug fixes. Localization Setup
Don't underestimate how much sheer time it takes to setup localization in iTunes Connect or Google Play, especially if you have a large number of achievements or in-app purchases. It's not hard, it's just... draining. [color=#282828][font=arial]Conclusion[/font][/color]
[color=#282828][font=arial]Like I said, it's too early to tell whether the game will be successful or not. The adoption rate thus far is small, and it hasn't been featured anywhere. I can't predict if any of that will change, but I'm still doing a heavy marketing push and looking for promotion venues. As any mobile developer (or any developer, really) knows, it's an uphill struggle.[/font][/color] [color=#282828][font=arial]Hopefully these lessons which are independent of the success of the game will be somewhat useful. I'll see a few weeks down the line whether the game's performance warrants a second postmortem. Until then, you ought go play the game maybe.[/font][/color]
## Rocket Rascal Optimizations
My latest game, Rocket Rascal, was just released into the iOS and Android app stores.
(You can get it here for iOS and here for Android)
I wanted to write a little bit about how I got the performance of this game to an acceptable level. For being such a casual game, I hit some serious performance issues late in the game. The iPhone 5 ran pretty much flawlessly, but the iPad 3 ran at 30-35 FPS, and the iPod Touch 4th Gen ran at 12-17 FPS. The 4th Gen was my low mark, and I wanted to get that up to at least 30 and the iPad 3 to at least 45. Problem 1: Garbage
If you develop in Unity for mobile, the garbage collector is going to crop up in your profiler at some point. The general rule: never generate garbage per-frame. This is harder than it sounds, since seemingly innocuous calls generate garbage. Things like gameObject.name generate garbage, and gameObject.GetComponent generates garbage only if the component doesn't exist. Converting numbers to text can also be obnoxious, especially if you have something like a "Score" string that changes frequently. This was mostly low-hanging fruit though - the Unity profiler made the problem points easy to identify and address. Problem 2: Fillrate
Have a look at the cityscape for the game:
There were 3 large images (*large* images) that were very fill-rate heavy. The resolution of iPad 3 is 2048 x 1536, and stacking these gets really expensive. I couldn't cut these out, they were critical to the aesthetic of the game, but they were definitely dragging me down. The solution here was to cut all those images up into two sections: the bottom portion didn't have alpha at all, whereas the top (smaller) portion still needed the alpha. Then I wrote dirt simple shaders to render this. Making the bottom portions opaque allowed them to be rendered much faste. This had the most pronounced impact on performance, and I quickly saw a 15 FPS boost on iPad 3. Problem 3: Colliders
Unity doesn't like having colliders manually moved around. It gets fussy, and performance starts to drag. The game uses almost exclusively hand-controlled colliders, however, so I had to do something there. I got a decent bump by just not updating the colliders of entities that weren't visible. It wasn't huge, but I couldn't exactly overhaul the game to address this. Problem 4: Sandworms
There are 3 sandworms that are constantly chasing the player, and sometimes there are extras in the background for aesthetics:
The sandworms were created by tweening the control points of a spline and then dynamically generating a mesh from that. Two subproblems arose from this: dynamically generating the mesh was expensive, and updating the collider was killer. Optimizing the mesh generation required just rolling up my sleeves and hammering on problem points the profiler picked up. There was only so much I could do here. I made a few loops take up less time and simplified how the curve was extruded, but ultimately didn't gain much. Optimizing the collider yielded significant boosts. Initially I was using a mesh collider for the snake, which was brutally expensive. Instead, I switched to approximating the shape with multiple capsule colliders. I also switched to only updating those colliders when the snakes were visible.
Problem 5: Device Hangups
The 4th Gen iPod just isn't that good a machine. There was an upper bar that I was never going to exceed. I turned off several features for the low-end devices. Motion blur had to go, and some of the background VFX get turned off dynamically. Dumb Luck
I reached a peak on the iPad 3 of between 45 & 55 FPS, leaning toward the low end. Then I updated Unity to the latest release and the average became closer to 55-60. So... that was nice. Conclusion
There were a bunch of other tiny things that needed tweaking, but the above had the most significant impact on performance. I managed to hit my iPad 3 target. 4th Gen iPod fell a little below the mark - it averages closer to 25 FPS, whereas I wanted 30, but I can live with that. It was definitely a *long* week of banging my head against a wall, so I'm very glad I was able to hit my performance targets.
## Alien Star Menace Post-Mortem
Preamble
[color=#444444][font='Helvetica Neue']Alien Star Menace is available now for iPhone & iPad on the Apple App Store and on Android in the Google Play Store.[/font][/color]
Background
[color=#444444][font='Helvetica Neue']Alien Star Menace started with a simple goal (if you're a dumb person) - make 7 small games in 7 days. There was some downtime at my day job, and I was itching to put another game out there; I hadn't released my own indie game since Cuddle Bears three years ago. So I browsed the net for some art assets I could use and jotted down some designs.[/font][/color]
[color=#444444][font='Helvetica Neue']The more I looked over the list, the more "strategy game in space" stuck out at me. I knew it wasn't a 1 day task. But it was interesting me more than all the other ideas I had. So I decided to throw out the original motivation and instead undergo a more ambitious project.[/font][/color]
[color=#444444][font='Helvetica Neue']I grabbed the art from Oryx Design Lab, prototyped some early gameplay in Unity, and Pixel Space Horror was born. Obviously, that name changed later in development. But we don't need to go into that.[/font][/color]
What Went Right
[color=#444444][font='Helvetica Neue']TIGSource Forums & Early Web Builds[/font][/color]
[color=#444444][font='Helvetica Neue']The initial design of the game had some fundamental flaws. The Action Point system - which was modeled after Hero Academy's - didn't work for this kind of strategy game. It encouraged standing still and letting enemies come to you. Units couldn't move or shoot through allies. Hallways were too narrow. All things which seemed OK as I was developing but which very much weren't.[/font][/color]
[color=#444444][font='Helvetica Neue']I was using Unity, so even though the game was targeting mobile devices, it wasn't terribly hard to push out a web build for public feedback. The folks at TIG immediately rallied against the game's obvious flaws. I fixed those issues, and through a steady stream of feedback polished up other pain points in the game.[/font][/color]
[color=#444444][font='Helvetica Neue']Flexible Milestones[/font][/color]
[color=#444444][font='Helvetica Neue']My mindset on milestones has always been to try to respect them vehemently. No new major features after alpha, certainly none after beta, Time leading up to an RC candidate should be all about bug fixing.[/font][/color]
[color=#444444][font='Helvetica Neue']I don't think I've ever worked on a game where these rules actually held, and Alien Star Menace was no exception.[/font][/color]
[color=#444444][font='Helvetica Neue']Case in point #1: I didn't have a finished title screen until a week before submission. I had some very pixel-art looking text. I wasn't even sure there *was* a title screen being worked on until the artist surprised me with it. What's there now is much, much better than what used to be there.[/font][/color]
[color=#444444][font='Helvetica Neue']Case in point #2: Two days before RC, I decided I absolutely hated the banner ads. They made the entire game look hideous, and they were almost certain to make no money. So I switched to Unity Ads - interstitials. Which were, by the way, scary easy to implement and have been giving pretty good return rates for the number of players I'm seeing[/font][/color]
[color=#444444][font='Helvetica Neue'].[/font][/color]
What Went Wrong
[color=#444444][font='Helvetica Neue']Content Heavy Design[/font][/color]
[color=#444444][font='Helvetica Neue']I've come to accept that level design is not a strong suit of mine, and it saps away energy like nothing else in game development. I gradually learned which things in my levels worked better than others, but it was hard learning and not terribly rewarding personally.[/font][/color]
[color=#444444][font='Helvetica Neue']I'm not unhappy with how the levels turned out, I actually think a lot of them work really well, but I think a talented level designer could've done better and had more fun doing it.[/font][/color]
[color=#444444][font='Helvetica Neue']Writing was also stressful. It was something I enjoyed initially - I like telling jokes and crafting stories. My enthusiasm came and went for this; I think I would've benefitted from having another person punch up the text some.[/font][/color]
[color=#444444][font='Helvetica Neue']Art Direction[/font][/color]
[color=#444444][font='Helvetica Neue']Sometimes I trick myself into thinking I have an artistic eye, and then I try to use it and quickly realize I was horribly, horribly wrong.[/font][/color]
[color=#444444][font='Helvetica Neue']I had a huge struggle trying to get the later levels to look good. Once you touch down on the alien planet (spoiler), the background changes - a starfield didn't make sense anymore. And I had no idea what to do.[/font][/color]
[color=#444444][font='Helvetica Neue']I hacked for days trying to get something that looked good. And I was never satisfied. I'm still not satisfied. I came up with a neat, creepy visual effect, but the backgrounds still feel flat overall[/font][/color]
[color=#444444][font='Helvetica Neue'].[/font][/color]
What's Undecided
[color=#444444][font='Helvetica Neue']I signed up for a bug tracking system over at Axosoft. I used it for about two days before I nixed it. Instead, I either fixed bugs as I went or jotted them down in a notepad. It might've helped if the project had more people, but for a one man game it didn't do much for me.[/font][/color]
[color=#444444][font='Helvetica Neue']Marketing[/font][/color]
[color=#444444][font='Helvetica Neue']I put together a pretty comprehensive Press Kit. I wrote over 160 e-mails/messages to various reviewers & YouTubers. I kept running dev logs on Tumblr, TIGSource, and GameDev (that last one not as frequently). I posted almost daily on Twitter and less frequently on Facebook. I talked to everyone I met about my game, including a few dates who couldn't have been less interested ;). I attempted community involvement wherever I could fit myself in.[/font][/color]
[color=#444444][font='Helvetica Neue']It's hard to gauge the impact this has all had. The initial launch has been slower than I'd like, but there's time for it to build. [/font][/color]
Conclusion
[color=#444444][font='Helvetica Neue']It's still a little early to determine if Alien Star Menace was a success or failure. I'm writing this while everything's fresh in my mind, and the game hasn't been out long enough to get a good impression of its performance.[/font][/color]
[color=#444444][font='Helvetica Neue']I'm pleased with how the game came out. It's my largest independent work, and in some regards my best. I think it brings a type of strategy game to mobile that was previously missing or underserved.[/font][/color]
[color=#444444][font='Helvetica Neue']I'd like to thank owmywrist for her constant support, testing, and for listening to my endless gamedev babbling, multitude-ofcasualities for her naming help and press kit advice, pythosart for her fantastic title screen art which I used in a ton of different unintended ways, ua86 for some really solid gameplay advice / feedback, and missmesmer for basically being my #1 Tumblr fan.[/font][/color]
[color=#444444][font='Helvetica Neue']I hope you enjoy the game, and I'd love to hear your feedback![/font][/color]
## Alien Star Menace is Out
My latest game was released! For both iOS and Android - I didn't intend to make an Android build, but it turned out to be way easier than I thought.
App Store
Press Kit
Feedback, comments, and ratings are welcome.
I'll probably write up a post-mortem in a few days. Currently heads down on marketing.
Here are some of the final screenshots that made it into the marketplaces:
## Promotional Materials
Alien Star Menace is in review with Apple right now and should hopefully be available for iOS devices in the next few days.
Here are various promo elements I've been putting together for marketing:
## Pixel Space Horror Demo 2
I've just released the second demo for my game Pixel Space Horror.
Play the Second Public-Ish Demo Here
I'm not going really wide until the alpha build, but I've made enough progress to push up another "kinda private" build for feedback. A lot of changes:
Action Point system has been entirely overhauled
You can now move & fire through allies.
Significant camera changes
15 levels & 12 units fully implemented and playable
Sound & music (though you may not be able to hear it in the web build
Proper level & unit unlocking and progress saving
AI improvements.
A million other tiny things.
CHEATS:
W - Win current battle
L - Lose current battle
R (in lobby) - Reset game
Known Issues:
The unit balance is still way off.
Level balance is also way off.
The Commander's special power isn't working right.
The enemy doctor doesn't behave correctly.
Any and all feedback is welcome.
## Pixel Space Horror Rebalancing
[color=rgb(68,68,68)][font='Helvetica Neue'] Rebalancing[/font][/color]
One of the criticisms of my pre-alpha demo of Pixel Space Horror was the balance - not so much the unit-to-unit balance (which was and still is broken), but the overall game balance. The action point (AP) system encouraged sending in a single unit, doing as much damage with him as possible, and then either pulling him out or watching him die.
I've made a number of changes to the balance to address this and other issues:
The largest change was to completely change the action point system. Instead of 5 global points that can be used however the player wishes, we now have 2 points per unit. A unit can move twice, attack twice, or move and attack.
[color=rgb(68,68,68)][font='Helvetica Neue'][font=inherit][size=1][/font][/font][/color]
When a unit can't be moved again that turn, I darken it to make this clear to the player. I'm not thrilled with the UI - it takes up too much room and isn't especially pretty - but I haven't found another alternative. (I don't show the UI for the enemies).
This had the side effect of making enemy turns take too long. A map with 10 enemies would take 4x longer per turn (20 AP versus 5). To counter this, the enemies can only ever use up to 10 AP max. It means some enemies won't move every turn, but it keeps battles faster. It also gives me a dial to turn if I want to add harder difficulties.
The next (and simplest) thing I did was rescale all the numbers. Instead of hit points in the range 2-10, they're now in the range 20-100. Attack has been scaled appropriately. This gives me more room to play with numbers, which is especially important for my splash damage units.
The final task was to allow units to move & shoot through allies. This was a constant complaint, and there was no real design thought that went into it prior - I just let the pathfinder do its work naturally. But most strategy games allow moving through allies for a number of reasons: it allows for more strategies, and it awkwardly confines units less. You still can't move through enemies, and only a few attacks allow shooting past them.
I think these addressed most of the concerns, and the game does feel better as a result. The balance is still horribly broken, but that'll be fixed as I playtest more.
## Pixel Space Horror Levels
During this phase of development for Pixel Space Horror, I'm focusing on content. Specifically, I'm trying to get the first 15 levels roughed in - not final, definitely not polished, but playable.
I'll keep this update short on words and instead just show you:
How am I going to rationalize a graveyard on a spaceship? I'm not.
## Pixel Space Horror Units & Enemies
I spent most of the weekend adding the vast majority of units & enemies for Pixel Space Horror that will be seen in the game. The majority of the implementation details were already there: the game already supported most of the attack types & buffs that units would provide.
I created a simple test level to make sure everything was behaving - that units were attacking and moving correctly and nothing exploded.
So here it is, the full cast of Pixel Space Horror:
The units aren't done-done. Visually, most of the attacks use stubs. I want to add flashy VFX (particle systems, camera fun, full-screen effects) to make the game fun to watch. Not quite Atlus-level. I don't have those resources, and also I don't want to slow the game down the way Atlus does in their games. The stats need heavy tweaking - heavy, heavy tweaking after a lot of playtesting - and the writing for them isn't final.
## Pixel Space Horror Pre-Alpha Demo
Update 4: Pre-Alpha Demo
[color=rgb(0,0,0)][font=Verdana][background=rgb(252,252,252)]I wanted to put together a small web build, mostly for friends to evaluate. If you want to try, [/background] [/font][/color]here's the link.[color=rgb(0,0,0)][font=Verdana][background=rgb(252,252,252)] I'm not putting it up in Playtesting yet - it's a little early to get a huge number of eyes on it. Feel free to leave any comments/feedback. Pointing out bugs isn't as useful at this stage, as it has plenty.[/background] [/font][/color]
[color=rgb(0,0,0)][font=Verdana][background=rgb(252,252,252)]My goal with this demo was to just make a small slice of representative gameplay. To that end, there are:[/background] [/font][/color]
5 levels
5 available units
A handful of different enemies
Stub UI - this definitely isn't anywhere near complete
[color=rgb(0,0,0)][font=Verdana][background=rgb(252,252,252)]None of the numbers/unit stats are final, and the levels definitely still need tweaking. The "story" will get edited, but the jokes in there are pretty close to what the game's humor will be like. The UI flow will be similar, but I've scheduled in a lot of time for heavy polish.[/background] [/font][/color]
[color=rgb(0,0,0)][font=Verdana][background=rgb(252,252,252)]Features not even remotely represented:[/background] [/font][/color]
Level ratings - you'll get stars based on how well you do
Multiple win conditions - not every mission requires you just kill everything
[color=rgb(0,0,0)][font=Verdana][background=rgb(252,252,252)]So far I've only put it in front of one other person. She tells me there are some crash bugs (I have yet to see them myself). She also tells me it's pretty easy. It is only the first 5 levels though, so being pretty easy is kinda the point. Oh, and please forgive the font. I'll be changing that ASAP.[/background] [/font][/color]
[color=rgb(0,0,0)][font=Verdana][background=rgb(252,252,252)]Anyway, lots of forward progress! I'll probably have an alpha build before the month's end which I'll push out to more people.[/background] [/font][/color]
## Pixel Space Horror UI Update
Most of the last few days has been spent putting together some necessary UI elements. I'm trying to keep the game pretty UI-lite, but there's no avoiding some things.
First, the level selection screen or "lobby." Right now we're assuming all the levels are unlocked, which is definitely not the start state. There are some graphical flares that don't get represented in a screenshot: stars randomly twinkle and the moons rotate around their parent planets.
Then there's the squad selection screen, where the player selects which units will go into combat. Will probably need a bit of text to tell the player what to do. But it's fully functional - you can drag units into the squad and they will be represented in battle!
And finally, the basic cinematic text dialog. Most of the story is going to be represented by one or two sentences at the beginning/end of each mission. This isn't an RPG - it's a tactics game, and the story is not a driving focus.
It all needs a little love. More flares, tweaked layout, definitely a better font. But it's functional, which right now is the most important part!
Pixel Space Horror Tumblr
## The AI of Pixel Space Horror
Background
Pixel Space Horror (PSH) is a very streamlined strategy game: units can only move and attack. For movement, they have a fixed range they can move. Attacks are more varied - every unit has some special qualities when it attacks, such as whether attacks do explosion damage or whether they require line of sight.
Being turn-based, the AI doesn't have the complications of many other games: we have plenty of time to calculate pathfinding and decisions without the player noticing any lag. We don't have to rely on as much estimation and can take a more complete view of the scene. Further, we don't have to respond in real-time to a changing game - the player can't move out of the way mid-action.
However, being a strategy game, the AI comes under extra scrutiny: it has to make solid tactical decisions to wipe out an enemy team. Everything it does can be seen at all times, and if it does something stupid, players will immediately notice.
Goals
The goals for the AI system were the following:
Don't be stupid - enemies don't need to think 10 moves ahead, but they can't stupidly move back and forth every turn.
Don't be predictable - the game should play differently each time, and the enemies should respond differently to the exact same situations.
Don't be boring - if there's one over-powered enemy, that enemy should not move every time even though it may make tactical sense.
Pathfinding
The pathfinding is crazy-simple, but no AI article would be complete without mentioning it. PSH doesn't have the real-time, large-scale needs of many games: we have plenty of time to consider movement possibilities, and the slow speed of the units makes the decision space pretty small.
PSH uses a simple breadth-first search to evaluate all the different spaces a unit can walk. It starts at the unit and evaluates every neighboring space for walkability. Once all the neighboring spaces have been evaluated, it then repeats the process starting at those neighboring spaces, and then repeats that process again until it has reached the unit's maximum distance and/or exhausted all walkable spaces.
This produces paths which are good enough - they may not be quite the most natural paths, but for our purposes the player won't notice.
Decision Making
PSH uses a brute-force mechanism to determine the next action. It evaluates every possible action for every unit and uses some priority-based decision making rules to determine which action to take next.
There are three varieties of actions for the AI system to consider:
Move
Attack
Move + Attack
Strictly speaking, Move + Attack is two actions, but we group them because it often looks more natural for a unit to move and then attack immediately after (it looks like the unit moved with the intention of attacking).
We evaluate an action based on the following parameters:
Does this action kill anyone? These actions take the highest priority. If an obvious kill is left dangling, players will notice.
How long has it been since the unit last moved? We want to spread out actions over all the units so the system doesn't get boring.
Does this action involve an attack? Actions that hurt the opponent take priority over actions that are simple movements.
If this action involves an attack, how much total damage does it do? Attacks which do a lot of damage to multiple units take priority over attacks which only hit one unit for a little.
If the action involves an attack, what kind of units are we attacking? We may want to hurt healers more than other units.
How close will this bring the unit to the enemies?
Each of these criteria is weighted so that some are more important than others. If the unit just moved, it might move a second time in a row if that move can do a lot of damage. If it has moved three times in a row, the odds are unlikely it'll move again even if it could do a lot of damage.
We add up all those weights to calculate a priority and then pick an action based on that priority. We don't necessarily pick the action with the highest priority - instead, we randomly pick an action, but the actions with the higher priorities are more likely to be selected. Obviously non-ideal actions can still happen, but it's more likely the system will pick something better. This gives us our unpredictability.
Results and Future Work
The system isn't yet perfect. There are edge cases that aren't being handled gracefully. For instance, if there is only a single unit left to attack, sometimes the system thinks it's more valuable to just move away than it does to attack. This leaves it possible for the player to gain victory over an obvious defeat. Some of this can be fixed by tweaking the weights on all our criteria, but some special handling will be required to be safe.
We also don't yet account for special units - units which heal or which apply stat bonuses. These will introduce new edge cases and new weighting considerations.
Still, the AI is producing reasonable results. It's moving and attacking and killing left and right. It's taking reasonable actions. I can beat it every time (some of this is the product of immature level design), but it usually takes down several of my units on the way. I'm happy thus far with the results.
## Cuddle Bears Gets Update, Free Version
I haven't posted here in a while. Most of that has been because my side-projects were completely buried under the weight of my job project (go play Lil' Birds for iPhone/iPad !), and I don't really have the authority/clearance to speak publicly about my work, leaving me without content suitable for a GameDev journal entry.
Work has settled for me a bit though, so I've had some opportunity to revisit my personal work.
You might remember Cuddle Bears. You really should only remember it if you have amazing memory or you're way more interested in me than you ought be, but you might. There's always been an update planned, and I finally gained the free time to execute on it.
So it's out now. It adds a fair bit to the game - most notably, unlockable score multipliers to give players incentive to continually play.
The game's free version was also released. It has complete feature parity with the paid version, with the only change being that some of the menus now have ads at the bottom.
It's my first experiment into ad-driven games, dipping my toes in to see how they perform. Cuddle Bears is by far my most downloaded side project, so I figured it was a good candidate, and I'm not sold on the efficacy of trial versions for iOS games.
It's interesting to look at some of the differences between how the two are doing:
Those are just some initial thoughts. It's not a lot of data to work with yet - I'll put up an updated post after some time has passed to evaluate how progress is going.
In the meantime, you should definitely get the game and leave a rating. Either version, I won't be picky. ;-)
## Cuddle Bears is Out for iPhone, iPod Touch & iPad!
Our latest game, Cuddle Bears, is now available for iPhone, iPod Touch, & iPad. Here are some links for your pleasure:
The game is free for a limited time, so feel free to try it out. If you're a member of the gaming press & would like to review it, by all means get in touch. If you'd like to chat with any of the developers, we'd be happy to talk with you.
Here's the game's description:
Run from the Big Baddie Wolf through the land of Cuddle Bears. Cuddle Bears love to cuddle, but their little hugs slow you down. Luckily for you the wolf has a taste for bear. Bop the Cuddle Bears on the head to stun them and slow the wolf down while he devours the little guys. But watch out! Murdering Cuddle Bears may have consequences.
HIGHLIGHTS:
- Jet Packs! Trucks!
- Hop from bear to bear and earn huge score multipliers
- Time your Perfect Jumps and go sailing through the air
- 20 Game Center achievements to unlock
And a promo poster!
## iPhone Development Pitfalls
These are fresh in my mind, so I thought I'd put together a quick blog post:
## Thoughts on Android NDK
In my spare spare time (that is, my time not dedicated to my day job or an active side project), I've been hacking with porting Word Duelist from iPhone to the Android NDK. I wrote my core iPhone code in C++ without STL specifically to facilitate this. Of course, then Google announced the newer versions of the NDK would have full STL support, but that is neither here nor there.
Here are some of my experiences in no particular order:
Working with the NDK hasn't been completely painless, but it's not the worst platform I've worked with. I don't have a timeline for Word Duelist's release there, there's still a fair bit of debugging ahead, but if progress moves the way it has I expect a month or two.
## Digging Into my Toolbox
I've spent the better part of my professional game development career developing tools. All the same, when I'm involved in a personal project, I have 0 interest in writing a tool. I much prefer the free (or cheap) stuff off the internet. Here's what I've been using lately:
BMFont
I've done a fair bit of searching for bitmap font generators, and the best one I've found - either free or paid - is BMFont. It supports every locale I've encountered and includes the option to only generate characters from a list (useful when you're making bitmap fonts for Japanese or Chinese). It gives a good bit of control over the generated textures and multiple export options. Rendering using the generated information is also pretty trivial (there's a forum post floating around that outlines pretty much the exact code).
CrossOver
## Word Duelist is Near Release ...Help?
Word Duelist has been submitted to iTunes Connect and is now "Awaiting Review." Based on how long my previous games have taken, I would expect it to be through review in about a week. Hoping it passes.
The end cycles were brutal.
Even with my independent development (probably moreso, actually), I try to keep to rigid end cycles. Those cycles being:
Alpha - Every feature is complete and a majority of the content.
Beta - Content complete. The game has been polished & tested thoroughly, and all major bugs are fixed, though there are still probably some hiding.
RC - The game has 0 bugs... ideally. Usually RC requires a few passes before shipping, though.
I'm less concerned about the stuff leading up to alpha - my preproduction is pretty loose, and I don't keep to milestones like First Playable or Second Playable. This has worked fine with games where development involves less than 3 people over the course of a few months (and when I say 'few months', I'm talking about the few hours a day I can spare between work and life).
I'm super glad I keep those end cycles and regular testing, because I'd release some extremely buggy games without them. Word Duelist had a bug that would lock up the entire game as late as RC2 - a rare edge case that few people would encounter, but if Apple had encountered it I'd have wasted a week in review.
Anyway, that's that.
So here's why my title includes "...Help?" : I'm rubbish at marketing and could use some advice.
I sent a press package for See the Light to about 25 sites. It was fairly competent, I thought - included descriptions about the game, a description of the "company", assorted screenshots/promo images, and a free download code to play the full game. Of those sites, 1 reviewed the game. 3 sent me canned responses offering to sell me a review and advertising. I never heard from the other 21.
I'll grant that See the Light was a niche game (raking in, so far, about $12), but that still seems askew. Word Duelists''s appeal is broader - I think - and I'd like to see it do better in the market. I have no illusions of buying a retirement home with it, but ideally I'd like to cross the$150 threshold where Apple actually pays out.
I figure there are a lot of extremely knowledgeable folk around here, so if you have any ideas on how I could effectively market/promote this, I'd love to hear them.
## Introductions are in Order
I realized that after two content heavy posts, I hadn't actually made the obligatory "hi" post. Plus the install time for my current project is monumental, giving me time to kill. So here's who I am and a discussion on my motivations with this blog:
I'm Brian. Currently an engineer at Spark Plug Games, where I have developed Pac-Match Party and am Lead Engineer on the Puzzle Quest 2 iPhone/iPad project. Whether or not I'm actually qualified to be a lead anything is a question nobody seems to be asking, and I'm definitely not answering. At any rate, both of those have shipped, so go buy them.
Before Spark Plug I spent about 1.5 years at Emergent Game Technologies as a Tools & Framework Engineer on Gamebryo. I touched a bunch of different areas there, mostly art tools. Then bad things happened to the company, and I don't know any more than anyone else reading the GD.net Dailies.
Before *that* I was an intern at Electronic Arts Chicago... to which bad things also happened shortly after my internship ended. And before that I was a debugger/porter at CSP Mobile... which the internet tells me has gone bankrupt.
Bit of a trend there. I try not to think about it much.
All this has been punctuated with stints as a grad student doing various amounts of teaching and tutoring and university level software engineering. I also keep a healthy number of side projects going, developing for XBLIG and iOS and Android. GameDev tells me I've been a registered user since 2000. Yikes I'm old.
So that's me. And here's where I'm going with this blog:
Basically I want a place to babble about purely game development related technical/design ideas. I already keep a separate blog (The Animal Farm Creations) that showcases some work and has plenty of geeky technical posts, but it's more personal and not entirely targeting fellow developers. That's what I'm hoping to do here.
I make no warranties about the quality or interestingness of my posts. I also don't guarantee I won't make up words like interestingness. These are risks you take as a reader.
## Scripting Languages are Overrated
Back when I was first starting ambitious game dev projects, I believed a good scripting system solved a lot of problems:
(1) It appeared super modular. After all, I could keep so much game logic outside of my code and only use C++ for engine development.
(2) Rapid iteration! Who doesn't want to change a logic file and instantly see the results on screen?
(3) Everyone can understand scripting languages. I mean, c'mon, Lua is the most readable thing ever.
I went down the scripting road, rolling my own, integrating outside languages, and bragging about how awesome my code was. Except that it wasn't - pretty much none of the above was actually true. And there were hidden costs I wasn't seeing.
Since then I've had the opportunity to mature a bit, and I've worked on large code bases with varying amounts of scripting in place. This has caused me to reevaluate the above.
(1) was true enough, I guess, except where you have to worry about binding. Where scripting languages are involved, a healthy portion of engine code must be dedicated to getting the engine & scripts to communicate. Scripts aren't much good if they don't know about game objects after all. This is less true with languages like C# where reflection can automagically handle a lot of the binding, but if you've ever looked at code that binds Lua & C++ you know it's a mess. Plus in the end, even if we discount binding, the scripts aren't any more modular than, say, just having separate source files in their place.
(2) is incredibly tempting, but it comes at a non-significant cost in development time. Rapid iteration isn't free, and depending on the game's complexity, can be prohibitively difficult. That's just for PC development - if you want rapid iteration over an iPhone or an Xbox, you're looking at a whole new set of challenges. It's great if you have the time to develop and maintain it, but it's a lot of work.
(3) completely depends on your people; Lua and Python are a little easier on the eyes to those who don't code all day, but in the end that's mostly syntax and semantics - the real challenges of coding are more about problem solving. Odds are if you have people who can write good Lua code, they could write the equivalent in C++ with a little added extra education (you may want to leave out pointers/dynamic memory, though). More importantly, a lot of people *don't* write good script code, and so you're looking forward to a bright future of hearing "Hey Engineer, come help!"
Those are rebuttals to the initial three points, but I also mentioned hidden costs associated with scripting.
(1) The number one thing most engine integrated scripting systems lack is a good visual debugger, and that's a crime. By far the largest boon of modern development environments (ie Visual Studio) is a quality debugger. I used to tell my operating systems class that they'd be spending upward of 70% of their time debugging problems, and I think that's true with a lot of development (especially at the end cycles). With a large group of game scripting systems, the best you can hope for is a stack trace. Helpful, but not a lot. Willfully throwing away a debugger is crazy talk. Of course, you can develop an integrated debugger, but that's tricky business.
(2) You're moving a lot of errors to be run time checks instead of compile time, and that time adds up, especially if you're developing for a system where the turnaround time between changing something and reinstalling the game is non-zero (ie: iPhone). I can't count how many times I've made a silly Lua mistake over the last couple months only to slap my forehead and have to restart the program. Depending on your system, the error may not even be immediately evident. If scripting errors cause a soft fail + error log versus a hard break, you could overlook a vital error.
So that's that.
The above might give you the impression that I hate scripting, which isn't completely accurate. If you have the time to create a proper rapid iteration solution and a real-time visual debugger, scripting can be pretty awesome. Even without that, confining scripting to small chunks of logic that can easily be made iteration friendly (ie: spell effects) has proven to save me a lot of time. But unless it can be done right, using scripting to drive game logic can often involve more work than it saves.
## Data Driven UI
Prologue
For one of my first XNA games, I decided to hand code my UI. This left me with a ton of statements like this:
StaticImage img = new StaticImage(someImage);
img.Position = (50,50)
img.CenterHorizontal = true
...
I thought this would be OK, since the project was intended to be small and essentially over upon release. Then I decided to port it to the iPhone; when I looked over the code again, I threw myself on my sword.
Shameful.
The UI code made my codebase a bloated, cluttered mess. It was hard to port correctly (not to mention hard to develop initially), and it cropped up all over the place. The UI animations added even more. In the end, I think more of the code was devoted to UI than to actual gameplay.
I vowed right then and there - never again. I got to work creating a new UI system that would allow me to data drive everything with minimal code work.
Development
Here were some of the requirements:
(1) No UI should need to be created in game code aside from a "LoadWindow" call.
(2) Creating custom UI code should require as little typing as possible - reasonable defaults should be used for virtually everything that wasn't explicitly specified.
(3) I should be able to specify simple animation sequences. Fades, Slides, Rotations, etc.
(4) This should interface with string table support in case I want to do any future localization.
(5) The UI files should be human readable. Not necessarily XML, but definitely not binary.
(6) The system should support relative, auto-adjusting layouts. If I ever want to move to a different platform/resolution, I don't want to have to recreate everything. Full UI portability is not 100% feasible, but every little bit helps.
Most of these problems have pretty straight-forward solutions. The actual widgets are defined in code, and a factory parses an XML-esque file to instantiate them. Here's an example:
StaticImage
Name: image_dolly
Image: background
Position: 50,50
/StaticImage
Label
Name: label_title
Text: "Hello World"
FontName: Vera
/Label
Animation
/Animation
When I call LoadWindow on this file, it loads up all the widgets and animations. I can then make a call to "PlayAnimation" to actually play the animation.
That's the high level look. Now let's take a look at how the system was built to fulfill the above mentioned requirements:
(1) No UI should need to be created in game code aside from a "LoadWindow" call.
This should be fairly self-explanatory. The UI loader knows how to interpret all the possible widgets (all the ones I've implemented so far) and any parameters that need to be set on them like position and color. Sometimes writing the UI file can end up more verbose (I don't support looping to create groups of widgets), but generally the files trend smaller than the equivalent code that would otherwise be necessary.
(2) Creating custom UI code should require as little typing as possible - reasonable defaults should be used for virtually everything that wasn't explicitly specified.
This is also straight-forward. If something isn't specified in the UI file, a default is used. Conversely, I have asserts in place for where parameters are required and not specified (for instance, the Image parameter of a StaticImage). The defaults generally make sense - centering is off by default, a default system font is created for strings, the default color is white, and so on.
(3) I should be able to specify simple animation sequences. Fades, Slides, Rotations, etc.
There was a little work involved in this one.
First, the system recognizes two types of animations: Parallel Animations (default) which are animations where all the steps run in parallel - ie: fading all the widgets in at the same time. And there are Chain Animations, which happen in sequence - moving a widget to one location and then to the next and then to the next.
Each animation can have a series of operations - fades, slides, stalls, and rotations right now. Each of those operations applies to a single widget. So an operation can slide a widget from off-screen to the center of the screen as part of a larger animation. For convenience sake, I also define an "all" widget, which tells my UI system to apply that animation to every widget in the file - useful for full-screen fades.
An animation operation can also be another animation. So if I have a Chain Animation named slide_in_out, I can reference that animation from within a Parallel Animation and thus have the chain occur while other things are going on.
There are lots of little nuances that go into this - animations can loop, for instance; operations can have easing associated with them for more interesting transitions; operations can also show widgets once they begin or hide widgets once they end.
Finally, you may be wondering what the * characters in my example represent. They basically represent the value of the associated parameter when that widget was initially created. For instance, in the above example, *,* gets interpreted as 50,50 when the animation executes. This is a convenience token that allows me to move widgets around without finding every animation that references the widgets and changing those as well.
(4) This should interface with string table support in case I want to do any future localization.
Whenever a string is read within the UI file - for example, for labels - the underlying code first looks to see if that string exists within a master string table. If not, it logs a message and uses the string as-is. This allows the code to run unhindered but also lets me know where my string table is lacking.
Unfortunately, right now the system does not have good support for formatted strings. So I can't put a string in my table that reads "Player 1 has {1} points" and have the system resolve {1} automagically. That requires code.
(5) The UI files should be human readable. Not necessarily XML, but definitely not binary.
You'll notice that the above example looks XML-esque while not actually being XML. There's no dark secret here; I had some file loading code in my codebase but no XML loaders (Objective-C is not as nice about this sort of thing as C#). I opted for the lazy route. I could just as easily put tags around everything but opted not to.
(6) The system should support relative, auto-adjusting layouts. If I ever want to move to a different platform/resolution, I don't want to have to recreate everything. Full UI portability is not 100% feasible, but every little bit helps.
This is honestly where my system is the weakest. WPF has all sorts of nice little grid layouts and tables and what-not that will automatically rearrange/resize widgets. Getting that to work well would require more time than I wanted to devote, so I opted for a lazy route:
I have two variables: res and halfres. When they are seen for a vector2 parameter, they are translated to, well, the screen's resolution or half the screen's resolution respectively. This makes doing things like centering or sizing a little easier.
Epilogue
Obviously the system has holes. If I had more time to devote, I'd work on some nice auto-layout features. I might also add a variable system to specify UI global variables (ie: a common "down color" for text buttons). Sound integration is on the TODO list. The biggest thing I want to do, though, is write a nice layout tool so I don't have to hand-write these files: they're 150% better than writing the associated code but still time consuming.
Regardless, the system is a big step up from where I was. It's made my iPhone development much less painful and allows me to focus less of my time on nice UI effects and more on the actual gameplay.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17060330510139465, "perplexity": 2076.2164488163903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986726836.64/warc/CC-MAIN-20191020210506-20191020234006-00534.warc.gz"}
|
http://physicshelpforum.com/special-general-relativity/15119-acceleration-contradiction.html
|
Special and General Relativity Special and General Relativity Physics Help Forum
Feb 23rd 2019, 08:24 PM #1 Junior Member Join Date: Feb 2019 Posts: 7 Acceleration contradiction? Let there be three inertial frames, F0, F1 and F2. F1 is moving in the negative x direction relative to F0 with speed V = sqrt(3)/2 * c. F2 is moving in the positive x direction relative to F0 with speed V = sqrt(3)/2 * c. In frame F1 there are two BB pellets at rest spaced 10 meters apart as measured in F1. F0, per Einstein, measures the separation between the two BB pellets to be 5 meters. At time t0 in frame F0, frame F0 simultaneously starts both BB pellets accelerating in the positive x direction. The two BB pellets each accelerate in the identical pattern until they have zero velocity relative to frame F2 at which time F0 simultaneously stops the accelerations. An observer travels with one of the BB pellets accelerating identically as the adjacent BB pellet accelerates. That observer measures that the two BB pellets were initially 10 meters apart. When the BB pellets and the traveling observer have zero velocity with respect to frame F0, the traveling observer measures the BB pellets to be only 5 meters apart, just as every other observer in frame F0 does. When the BB pellets and traveling observer reach frame F2 and stop accelerating, the traveling observer once again measures the separation between the two BB pellets to be 10 meters. Since the acceleration is always in the positive x direction, and since the pattern is identical for the two BB pellets (we can say make the acceleration constant as measured by the traveling observer), why does the accelerating observer say that the identical acceleration in the positive direction sometime causes the two BB pellets to move toward each other (the first part of the journey) and sometimes causes the two BB pellets to move away from each other (the second part of the journey)? Thanks David Seppala Bastrop TX
Feb 25th 2019, 06:47 AM #2 Senior Member Join Date: Jun 2016 Location: England Posts: 845 You are hopping between reference frames in a carefree manner! One always has to be very careful when moving between reference frames that all relevant terms are considered. I haven't had time (yet) to trace your arguments slowly and carefully through each of your jumps, so I can't comment (yet) on the correctness of your conclusions. __________________ ~\o/~
Feb 26th 2019, 04:45 AM #3 Senior Member Join Date: Jun 2016 Location: England Posts: 845 It was a little confusing because you have a miss-typing in your post: The pellets start accelerating from Frame F1 (not F0 as your post seems to indicate). The Pellets are now in their own (accelerating) frame (FP). At all times the observer in Frame FP will observe the pellets to be 10m apart. But observers in other frames will observe the distance between the pellets to change as their velocities relative to FP change. __________________ ~\o/~
Feb 26th 2019, 05:16 AM #4 Junior Member Join Date: Feb 2019 Posts: 7 Woody wrote: "The Pellets are now in their own (accelerating) frame (FP). At all times the observer in Frame FP will observe the pellets to be 10m apart." No, your statement is incorrect. When the accelerating frame FP has zero velocity with respect to frame F0 they measure the same distance between the BB pellets as all other observers in frame F0 do. All F0 observers measure the distance between the BB pellets to be 5 meters from the start to the finish of the acceleration since frame F0 measures the initial distance between the BB pellets to be 5 meters (per Einstein), and the acceleration of each BB pellet started simultaneously from frame F0 point of view. David Seppala Bastrop TX
Feb 26th 2019, 06:04 AM #5
Senior Member
Join Date: Jun 2016
Location: England
Posts: 845
When the accelerating frame FP has zero velocity with respect to frame F0 they measure the same distance between the BB pellets as all other observers in frame F0 do.
Agreed.
All F0 observers measure the distance between the BB pellets to be 5 meters from the start to the finish of the acceleration since frame F0 measures the initial distance between the BB pellets to be 5 meters (per Einstein), and the acceleration of each BB pellet started simultaneously from frame F0 point of view.
Wrong.
The whole point of the reference frames is that all objects (and observers) in that frame are moving at the same speed.
Any observations between two (or more) frames (moving a different speeds)
will produce differences to the same observations taken within a single frame.
The pellets (FP) are initially in Frame F1 but are accelerating through Frame F0 (and eventually on to Frame F2).
When they are in Frame F1, observers in F1 will see them 10m apart,
observers in F0 will see them as being 5m apart.
When the pellets accelerate enough to reach Frame F0,
observers in F0 will see them as being 10m apart,
but the observers in F1 will now see them as being 5m apart!
The observers in FP will always see them as 10m apart.
__________________
~\o/~
Last edited by Woody; Feb 26th 2019 at 06:11 AM.
Feb 26th 2019, 07:23 AM #6 Junior Member Join Date: Feb 2019 Posts: 7 As stated in the original post, F0 measures the distance between the pellets to be 5 meters when they are in F1. F0 starts the acceleration of each pellet simultaneously, and each pellet accelerates in the identical pattern. Therefore, F0 always measures the distance between the two pellets to be 5 meters through out the entire journey. With identical accelerations and with each acceleration starting simultaneously as measured in frame F0, the distance between pellets as measured in F0 never changes. David Seppala Bastrop TX
Thread Tools Display Modes Linear Mode
Similar Physics Forum Discussions Thread Thread Starter Forum Replies Last Post magnetismman Advanced Electricity and Magnetism 2 Apr 8th 2014 01:42 PM ginarific Periodic and Circular Motion 1 Jan 31st 2010 07:43 AM
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392706513404846, "perplexity": 2157.3825336642494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257699.34/warc/CC-MAIN-20190524164533-20190524190533-00318.warc.gz"}
|
https://quantumtinkerer.tudelft.nl/blog/
|
# In the footsteps of Einstein
I teach the undergraduate solid state physics course, where we just switched to a shiny new book "Oxford Solid State Basics" by Steve Simon.
Steve's story of condensed matter physics starts with the heat capacity of solid materials. It's a great way to dive into how quantum mechanics combines with lucky guesses to improve our understanding of what is happening. It is also what we do in our course.
A great source of experimental data showing the problem is Einstein's original work, and Steve's book reproduces the plot from Einstein. (See also the English translation) Unfortunately that plot belongs to the current publisher of Annalen der Physik and cannot be republished under a free license. So in order to provide this data in the lecture notes and to make it available to whoever wants, I decided to take the original data Einstein has and repeat the exercise. Because we are living in an enlightened age, I also wanted to see if the more advanced Debye model would be any better for Einstein's data.
# Machine learning analysis of scientific articles
Neural networks have an advantage compared to humans because they have access to a much larger body of information. This is why what looks like random noise to a human, after correct processing by a machine learning algorithm turns out to be a signature of Higgs boson.
While analysing physics data with machine learning is definitely a great direction of research, another intriguing possibility is trying to infer what the researchers themselves think. To make an example, the domain of sentiment analysis tries to not only extract the information contained in the text, but also the attitude the author has about this information.
Last semester Anton was lecturing on the undergraduate Solid State Physics course at TU Delft. The course lasted several weeks, and each week there was a mini exam that students on the course could take for partial credit. This was a big course with 200 participants, and the prospect of having to manually grade 200 exam manuscripts every week was not something that anyone on the course team was looking forward to.
I wrote a column for the newsletter of our institute. Since I liked the result, I'm also reposting it below.
As a child I had a book "Bad advice" that contained nothing but poems suggesting you to do what you should really never do. So here is my bad professional advice (except that I won't risk making poetry):
Always remember: …
# Connecting the dots
### Why do spectrum plots look ugly?¶
Very often when we compute the spectrum of a Hamiltonian over a finite grid of parameter values, we cannot resolve whether crossings are avoided or not. Further if we only compute a part of the spectrum using e.g. a sparse diagonalization routine, we fail to find a proper sequence of levels.
Let us illustrate these two failure modes.
# Just some initialization
%matplotlib inline
import numpy as np
from scipy import linalg
from scipy.optimize import linear_sum_assignment
import matplotlib
from matplotlib import pyplot
matplotlib.rcParams['figure.figsize'] = (8, 6)
def ham(n):
"""A random matrix from a Gaussian Unitary Ensemble."""
h = np.random.randn(n, n) + 1j*np.random.randn(n, n)
h += h.T.conj()
return h
def bad_ham(x, alpha1=.2, alpha2=.0001, n=10, seed=0):
"""A messy Hamiltonian with a bunch of crossings."""
np.random.seed(seed)
h1, h2, h3 = ham(n), ham(n), ham(n)
a1, a2 = alpha1 * ham(2*n), alpha2 * ham(3*n) * (1 + 0.1*x)
a2[:2*n, :2*n] += a1
a2[:n, :n] += h1 * (1 - x)
a2[n:2*n, n:2*n] += h2 * x
a2[-n:, -n:] += h3 * (x - .5)
return a2
xvals = np.linspace(0, 1)
data = [linalg.eigvalsh(bad_ham(x)) for x in xvals]
pyplot.plot(data)
pyplot.ylim(-2.5, 2.5);
This is mock data produced by a random Hamiltonian with a bunch of crossings. We know that some of these apparent avoided crossings are too tiny to resolve, and should instead be classified as real crossings.
Let's now simulate what would happen if we also use sparse diagonalization to obtain some number of eigenvalues closest to 0.
truncated = [sorted(i[np.argsort(abs(i))[:13]]) for i in data]
pyplot.plot(truncated);
The ugly jumps are not real, they appear merely because some levels exit our window and new ones enter.
A desperate person who needs results right now at this point replots the data using a scatterplot.
pyplot.plot(truncated, '.');
This is OK, but at the points where the lines are dense our eye identifies vertical lines, making the plot harder to interpret.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39315325021743774, "perplexity": 1534.0461979279785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948125.20/warc/CC-MAIN-20180426090041-20180426110041-00617.warc.gz"}
|
http://encyclopedia.kids.net.au/page/hi/Highway_engineering
|
## Encyclopedia > Highway engineering
Article Content
# Highway engineering
Highway engineering is the process of design and construction of efficient and safe highways and roads. It became prominent in the 20th century and has its roots in the discipline of civil engineering. Standards of highway engineering are continuously being improved. Concepts such as grade, surface texture, sight distance and minimum radii of bends in addition to interchange design are all important elements of highway engineering. With the exception of the U.K., most Western countries such as the United States, Japan and Germany have extensive well developed highway networks.
All Wikipedia text is available under the terms of the GNU Free Documentation License
Search Encyclopedia
Search over one million articles, find something about almost anything!
Featured Article
Tangent ... function) defined as: $\,\tan x = \frac{\,\sin x}{\,\cos x}$ The function is so-named because it can be defined as the length of a certai ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5234248638153076, "perplexity": 1998.339210280061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00269.warc.gz"}
|
http://mathhelpforum.com/calculus/166822-epsilon-proof-x-infinity-currently-defeating-me-print.html
|
# Epsilon-proof as x->-infinity currently defeating me
• Dec 23rd 2010, 04:31 PM
Grep
Epsilon-proof as x->-infinity currently defeating me
I think Epsilon-delta proofs and I should spend some quality time together.
The problem is I need to prove that:
$\lim_{x \to -\infty} a^x = 0$
So, I need to show that, for every $\epsilon > 0$, there is a $\delta$ such that $|f(x) - L| < \epsilon$ whenever $x < \delta$.
So, given $\epsilon > 0$, I need to find $\delta$ such that:
$|a^x - 0| < \epsilon$ whenever $x < \delta$
And here's where I am stuck. I think I'll have to take a log in there somewhere. Wish my textbook wasn't so scarce on examples (Stewart's Calculus 4th edition). I sort of get limits where x->c, but I'm totally lost on the ones with infinities, and the differences between handling + and - infinities. Can't find any examples online of ones as x-> -infinity, which surely might help. Nor are the two videos at Khan Academy useful in this case.
I bet you guys get a lot of these. Surprisingly confusing for what seems a relatively simple concept, at first. Help, hints, pointers, nudges and taunts all appreciated.
• Dec 23rd 2010, 04:40 PM
Ackbeet
If you think of $\delta$ as a large negative number, and getting more and more negative, you'll be on the right track; also, you have to assume that $a>0,$ right? Otherwise, you have complex numbers floating around.
How can you simplify $|a^{x}-0|<\epsilon?$
If you can find a $\delta=\delta(\epsilon)$ that works, you'll be done. How could you do that?
• Dec 23rd 2010, 05:18 PM
Grep
Quote:
Originally Posted by Ackbeet
If you think of $\delta$ as a large negative number, and getting more and more negative, you'll be on the right track; also, you have to assume that $a>0,$ right? Otherwise, you have complex numbers floating around.
How can you simplify $|a^{x}-0|<\epsilon?$
Well, first and obviously, $|a^{x} - 0| < \epsilon \Rightarrow |a^{x}| < \epsilon$.
Then, I can assume that $a^{x} > 0$ so I have just:
$a^{x} < \epsilon$ whenever $x < \epsilon$.
Quote:
Originally Posted by Ackbeet
If you can find a $\delta=\delta(\epsilon)$ that works, you'll be done. How could you do that?
We want:
$a^{x} < \epsilon$ whenever $x < \epsilon$
Or:
$log_{a}(a^{x}) = log_{a}(\epsilon) \Rightarrow x = log_{a}(\epsilon)$
So I should try $\delta = log_{a}(\epsilon)$.
Given $\epsilon > 0$, we choose $\delta = log_{a}(\epsilon)$. Let $x < \epsilon$. Then
$|a^{x} - 0| = a^{x} < a^{\delta} = a^{log_{a}(\epsilon)} = \epsilon$
Thus
$|a^{x} - 0| < \epsilon$
Is that right? Egads, I think I just did it. Hey, I think I smell burnt toast! (Wink)
• Dec 23rd 2010, 07:32 PM
chisigma
Quote:
Originally Posted by Grep
I think Epsilon-delta proofs and I should spend some quality time together.
The problem is I need to prove that:
$\lim_{x \to -\infty} a^x = 0$
...
... that's true if $a >1$... of course...
http://digilander.libero.it/luposaba...ato[1].jpg
Merry Christmas from Italy
$\chi$ $\sigma$
• Dec 23rd 2010, 07:45 PM
Grep
Quote:
Originally Posted by chisigma
... that's true if $a >1$... of course...
Good point, the problem description says "Let a > 1", which I should have stated. Apologies for leaving out an important part of the problem description.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 35, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958623468875885, "perplexity": 975.0117672990782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660882.14/warc/CC-MAIN-20160924173740-00070-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://feynmand.com/ml-vader/
|
Want create site? Find Free WordPress Themes and plugins.
Let’s see how?
Let’s pick up from where we last left and continue our journey into the realms of Supervised Learning Algorithms – ‘The One Labels’. We will discuss our very first algorithm, called, Linear Regression.
Remember this:
[i]
Consider you are a Commander on the Death Star and you have just received Darth Vader’s command to present him a working system that allows him to choose the optimal battleship for defeating the Rebellion. Everyone else at work, you decide to take it upon yourself to develop this system.
So, let get going!
Machine Learning is all based on data. So, consider the following data set:
Size of Darth Vader’s Ship(sq. m.) Rounds of Lasers 2500 500 1695 256 2400 450 1500 240 3000 720 2650 520 . . . . . .
The above data can be plotted as follows:
The question: Given data like this, how can we learn to predict the number of laser rounds Darth Vader can fire, as a function of the size of a ship?
Before we tread any further, we need to establish some notations as the math is about to get messy. We’ll use x(i) to denote the “input” variables (Size of Vader’s ship in this example), also called input features, and y(i) to denote the “output” or target variable that we are trying to predict (laser rounds). A pair (x(i), y(i)) is called a training example, and the dataset that we’ll be using to learn —a list of m training examples {(x(i), y(i)); i = 1, . . . ,m}—is called a training set. Note that the superscript “(i)” in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values (ship size), and Y the space of output values (laser count). In this example, X = Y = R (Real Numbers).
A general overview of how a Supervised Learning Algorithm works can be drawn from this image:
[ii]
Here, training data is the data we have. New data refers to different sizes of ships that can be entered into the trained model and Prediction is the estimated Laser Rounds the ship can carry.
When the target variable(y) that we’re trying to predict is continuous, such as in our example, we call the learning problem a “regression problem”. When y can take on only a small number of discrete values (such as if, given the size of ship, we wanted to predict if it is a Jet or a Battle Cruiser), we call it a classification problem.
# How does it Work?
A regression algorithm fits data by drawing out features from available training example (in our case, the size of ship). It can have more features such as the Fuel Tank size, canon size and many more. It is up to the programmer/researcher/data scientist to choose the number of features.
To perform supervised learning, we must decide how we’re going to represent functions/hypotheses h in a computer. As an initial choice, let us say we decide to approximate y as a linear function of x (keep track of what x and y denote here):
$$h_\theta(x) = \theta_0+\theta_1x_1+\theta_2x_2$$ ..(1)
Here, the θi’s are the parameters (also called weights/features) parameterizing the space of linear functions mapping from X to Y. The hypothesis above takes three features into consideration. To simplify our notation, we also introduce the convention of letting x0 = 1 (this is the intercept term), so that the hypothesis can be written as:
$$h_\theta(x) = \sum_{i=0}^n \theta_ix_1 = \theta^Tx$$ ..(2)
Above, equation (1) is condensed into a shorter form. The right-hand part of eq.1 can easily be written as a summation of θ and x. Now you might wonder how the summation gets converted to matrices in equation 2. Here it is: if we put all the values of θ and x in two individual vectors (vectors are 1-dimensional matrices of order 1n or n1) and use transpose (T in eq.2 stands for transpose) on one of them, the product of the two will be equal to right-hand side of eq.1(If you have doubts, pick up a pencil and a paper and see for yourself).
Now, our task is to, given the training set, learn the parameters θ that contribute to the size of ship. On proper thought (go on; use those brains) this can be achieved if the hypothesis computed by out algorithm are close to y. So, we need to calculate
For each set of features, how close is the hypothesis to y. Hence, we define, another equation (don’t worry, it’s the last one for today), the Cost Function:
$$j(\theta)=\frac{1}{2}\sum_{i=0}^n(h_\theta(x^{(i)})-y^{(i)})$$ ..(3)
To those of you wondering what gibberish this is, this equation is called the Ordinary Least-Square cost function and to the brainiacs out there who are laying waste on their scalp over why not simply use the difference between h and y, the squared function is used so that the difference is always positive. And no, I didn’t just put it here to confuse you. It is all through a very natural and intuitive process that this equation comes into play (the grand design is at work). We can discuss more on this later.
Let’s move on! The boss fight where we get to know how this algorithm improves its outputs.
We need to find θ that minimizes the cost function (the lower the difference between h and y, the better the prediction). For this, we need to consider the Gradient Descent algorithm which is just a simple update for θ :
$$\theta_k=\theta_k-\alpha\frac{\partial}{\partial \theta}j(\theta)$$ ..(4)
This equation optimizes θ for all the values from (1, 2….k). k is the order of the feature vector θ and signifies learning rate (another one of those algorithm you don’t need to care about.) The algorithm is very natural in the sense that it repeatedly takes its steps in the direction of steepest descent, i.e., the direction in which θ the most unless the value for is so large that values overshoot and the function doesn’t find its minima or it is so small that it take very long to converge. You must avoid both of these conditions.
Upon solving the partial differentiation part in eq. 4, we get which when plugged into eq.4 gives up our final update rule:
Repeat until convergence: {
$$\theta_k=\theta_k+\alpha\sum_{i=0}^n(y^{(i)}-h_\theta(x^{(i)}))x_j^{(i)}$$
(for every j) }
The process is repeated n times for every j. For every value of j, the algorithm looks though all of the training data and then makes an update. This process is called the Batch Gradient Descent.
Another instance of this algorithm that works as follows:
for i=1 to n:
$$\theta_k=\theta_k+\alpha(y^{(i)}-h_\theta(x^{(i)}))x_j^{(i)}$$
(for every j)
It is called the Stochastic Gradient Descent. This process, too, works fine and if you have large data set to work on, this would be a better choice over Batch Gradient Descent due to its less convergence time.
After all this hard work, what is the outcome:
We find the values for θ0 = 90.30, θ1 = 0.1592, θ2 = −6.738 and the plot comes out as:
Finally, Darth Vader has a system that can tell him which ship size to choose to defeat Luke!
[iii]
Applications:
The applications of Curve-fitting are ubiquitous. From optimizing biological parameters to astronomy and from the stock market analysis to weather data analysis, Linear Regression is widely to estimate results.
I think this should satiate you for the day. Use the comments section for any queries. Until next time, this is Pratyush signing off.
Keep Hacking!
References:
[i] Image Source: http://starwars.wikia.com/wiki/Death_Star
[ii] Image Source:http://sebastianraschka.com/Articles/2014_intro_supervised_learning.html
[iii] Image Source: http://www.starwarswavelength.com/category/dark-side-thoughts/
Did you find apk for android? You can find new Free Android Games and apps.
### Posted by Pratyush Kumar
I am a homo sapien currently residing on planet Earth. I strive to understand things clearly and help others do the same.
### One Comment
1. […] and calculated the gradients [remember how we calculated gradients for the Cost Function in the previous article] we now need to update the parameters in such a way that the model accurately maps all the […]
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8430355191230774, "perplexity": 842.6386848838557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581033.57/warc/CC-MAIN-20171216010725-20171216032725-00425.warc.gz"}
|
http://math.stackexchange.com/questions/292376/verify-that-a-random-variable-is-a-stopping-time
|
# Verify that a random variable is a stopping time
Let $\lbrace X_{n}\rbrace$ be a stochastic process adapted to filtration $\lbrace \mathcal{F}_{n}\rbrace$. Let $B\subset \mathbb{R}$ be closed. Then $$\tau(\omega):=\mathtt{inf}\lbrace n\in\mathbb{N}:X_{n}(\omega)\in B\rbrace$$ is a stopping time.
My solution. Fix $k\in\mathbb{N}$. Then $$\lbrace\tau=k\rbrace=\lbrace\omega:X_{k}(\omega)\in B, X_{n}(\omega)\notin B,n<k\rbrace=\lbrace \omega:X_{k}(\omega)\in B\rbrace\cap\bigcap_{n=0}^{k}\lbrace\omega:X_{n}(\omega)\notin B\rbrace$$ Now, $X_{k}$ is $\mathcal{F}_{k}$-measurable, $B$ is a Borel set, so $X_{k}^{-1}(B)\in \mathcal{F}_{k}$. Since $B$ closed, the complement of $B$ is open, so also Borel, and for $n<k$ we have, by the same logic, $X_{n}^{-1}(B^{c})\in\mathcal{F}_{n}\subset \mathcal{F}_{k}$. The intersection of finitely many elements of $\mathcal{F}_{k}$ lies in $\mathcal{F}_{k}$, so for any natural $k$ it's true that $\lbrace\tau=k\rbrace\in\mathcal{F}_{k}$. Now, is this solution correct? Is it necessary to assume the closedeness of $B$? I have a feeling it's sufficient to assume that $B$ be Borel, isn't it?
-
Your solution is good, though your intersection should end at index $k-1$, not $k$. – Byron Schmuland Feb 1 '13 at 22:13
As ByronSchmuland already mentioned your intersection should end at index $k-1$, i.e.
$$\{\tau=k\} = \ldots = \{\omega; X_k(\omega) \in B\} \cap \bigcap_{n=0}^{k-1} \{\omega; X_n(\omega) \notin B\}$$
The closedness of $B$ is not necessary for the given proof. Since $B$ is a Borel set we also have that $B^c$ is a Borel set (it's a sigma-algebra!), hence $X_n^{-1}(B^c) \in \mathcal{F}_n$ by the $\mathcal{F}_n$-measurability of $X_n$.
If you consider a stochastic process $(X_t)_{t \geq 0}$, it wouldn't be that easy to prove that $\tau$ is a stopping time - in this case one often assumes that $B$ is open (resp. closed) and left- or right-continuity of the paths $t \mapsto X_t(\omega)$. But in this case it work's fine, because it's a process in discrete time.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9711276292800903, "perplexity": 121.9945153892792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892648.47/warc/CC-MAIN-20140722025812-00016-ip-10-33-131-23.ec2.internal.warc.gz"}
|
http://ufdc.ufl.edu/UFE0022377/00001
|
UFDC Home myUFDC Home | Help
<%BANNER%>
# Theoretical and Methodological Developments for Markov Chain Monte Carlo Algorithms for Bayesian Regression
## Material Information
Title: Theoretical and Methodological Developments for Markov Chain Monte Carlo Algorithms for Bayesian Regression
Physical Description: 1 online resource (94 p.)
Language: english
Creator: Roy, Vivekananda
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2008
## Subjects
Subjects / Keywords: bayesian, da, efficiency, markov, monte, multivariate, probit, px, regenerative
Statistics -- Dissertations, Academic -- UF
Genre: Statistics thesis, Ph.D.
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, terriorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
## Notes
Abstract: I develop theoretical and methodological results for Markov chain Monte Carlo (MCMC) algorithms for two different Bayesian regression models. First, I consider a probit regression problem in which $Y_1,\dots,Y_n$ are independent Bernoulli random variables such that $\Pr(Y_i =1) = \Phi(x_i^T \beta)$ where $x_i$ is a $p$-dimensional vector of known covariates associated with $Y_i$, $\beta$ is a $p$-dimensional vector of unknown regression coefficients and $\Phi(\cdot)$ denotes the standard normal distribution function. I study two frequently used MCMC algorithms for exploring the intractable posterior density that results when the probit regression likelihood is combined with a flat prior on $\beta$. These algorithms are Albert and Chib's data augmentation algorithm and Liu and Wu's PX-DA algorithm. I prove that both of these algorithms converge at a geometric rate, which ensures the existence of central limit theorems (CLTs) for ergodic averages under a second moment condition. While these two algorithms are essentially equivalent in terms of computational complexity, I show that the PX-DA algorithm is theoretically more efficient in the sense that the asymptotic variance in the CLT under the PX-DA algorithm is no larger than that under Albert and Chib's algorithm. A simple, consistent estimator of the asymptotic variance in the CLT is constructed using regeneration. As an illustration, I apply my results to van Dyk and Meng's lupus data. In this particular example, the estimated asymptotic relative efficiency of the PX-DA algorithm with respect to Albert and Chib's algorithm is about 65, which demonstrates that huge gains in efficiency are possible by using PX-DA. Second, I consider multivariate regression models where the distribution of the errors is a scale mixture of normals. Let $\pi$ denote the posterior density that results when the likelihood of $n$ observations from the corresponding regression model is combined with the standard non-informative prior. I provide necessary and sufficient condition for the propriety of the posterior distribution, $\pi$. I develop two MCMC algorithms that can be used to explore the intractable density $\pi$. These algorithms are the data augmentation algorithm and the Haar PX-DA algorithm. I compare the two algorithms in terms of efficiency ordering. I establish drift and minorization conditions to study the convergence rates of these algorithms.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Vivekananda Roy.
Thesis: Thesis (Ph.D.)--University of Florida, 2008.
Electronic Access: RESTRICTED TO UF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE UNTIL 2010-08-31
## Record Information
Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2008
System ID: UFE0022377:00001
## Material Information
Title: Theoretical and Methodological Developments for Markov Chain Monte Carlo Algorithms for Bayesian Regression
Physical Description: 1 online resource (94 p.)
Language: english
Creator: Roy, Vivekananda
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2008
## Subjects
Subjects / Keywords: bayesian, da, efficiency, markov, monte, multivariate, probit, px, regenerative
Statistics -- Dissertations, Academic -- UF
Genre: Statistics thesis, Ph.D.
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, terriorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
## Notes
Abstract: I develop theoretical and methodological results for Markov chain Monte Carlo (MCMC) algorithms for two different Bayesian regression models. First, I consider a probit regression problem in which $Y_1,\dots,Y_n$ are independent Bernoulli random variables such that $\Pr(Y_i =1) = \Phi(x_i^T \beta)$ where $x_i$ is a $p$-dimensional vector of known covariates associated with $Y_i$, $\beta$ is a $p$-dimensional vector of unknown regression coefficients and $\Phi(\cdot)$ denotes the standard normal distribution function. I study two frequently used MCMC algorithms for exploring the intractable posterior density that results when the probit regression likelihood is combined with a flat prior on $\beta$. These algorithms are Albert and Chib's data augmentation algorithm and Liu and Wu's PX-DA algorithm. I prove that both of these algorithms converge at a geometric rate, which ensures the existence of central limit theorems (CLTs) for ergodic averages under a second moment condition. While these two algorithms are essentially equivalent in terms of computational complexity, I show that the PX-DA algorithm is theoretically more efficient in the sense that the asymptotic variance in the CLT under the PX-DA algorithm is no larger than that under Albert and Chib's algorithm. A simple, consistent estimator of the asymptotic variance in the CLT is constructed using regeneration. As an illustration, I apply my results to van Dyk and Meng's lupus data. In this particular example, the estimated asymptotic relative efficiency of the PX-DA algorithm with respect to Albert and Chib's algorithm is about 65, which demonstrates that huge gains in efficiency are possible by using PX-DA. Second, I consider multivariate regression models where the distribution of the errors is a scale mixture of normals. Let $\pi$ denote the posterior density that results when the likelihood of $n$ observations from the corresponding regression model is combined with the standard non-informative prior. I provide necessary and sufficient condition for the propriety of the posterior distribution, $\pi$. I develop two MCMC algorithms that can be used to explore the intractable density $\pi$. These algorithms are the data augmentation algorithm and the Haar PX-DA algorithm. I compare the two algorithms in terms of efficiency ordering. I establish drift and minorization conditions to study the convergence rates of these algorithms.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Vivekananda Roy.
Thesis: Thesis (Ph.D.)--University of Florida, 2008.
Electronic Access: RESTRICTED TO UF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE UNTIL 2010-08-31
## Record Information
Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2008
System ID: UFE0022377:00001
Full Text
xml version 1.0 encoding UTF-8
REPORT xmlns http:www.fcla.edudlsmddaitss xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.fcla.edudlsmddaitssdaitssReport.xsd
INGEST IEID E20101221_AAAAAE INGEST_TIME 2010-12-21T06:50:03Z PACKAGE UFE0022377_00001
AGREEMENT_INFO ACCOUNT UF PROJECT UFDC
FILES
FILE SIZE 32630 DFID F20101221_AAADHU ORIGIN DEPOSITOR PATH roy_v_Page_09.QC.jpg GLOBAL false PRESERVATION BIT MESSAGE_DIGEST ALGORITHM MD5
a1212bb2daf0ef429e799cc3a97c00c7
SHA-1
1052f9960fe87b0ee480ccef2d983be5d42afb9c
6eebc8ccfd9d12a74a20176fa823499a
4feca1ed2d6f72e3a314f0a2cb33cd771b7573f6
9c27d62f1f0c9865f6ac72d458e3ee15cf9c3c5f
b93092f0865b09c11a59a7a4a8b5ba17787b7f65
8efa872404e6f7cf1bc31aa098b552e2
4044bd00c7798c66c2037887f949589bc3b12f01
f758892c9bc9a39c92aeddbae667586b
37c024a987823fdd08e1cb24d759f028ce348320
2a85af5fa66222295c0cf60ab1affd34
2510b931ea564309136c2216905cf09c4384ed89
75af7de1e93abc6e4350864f5fca240a
1b5707db7323274690c9e65258137737d3eb2e7b
554a5a8ecbf92f45e180f1be1c888b46
ac4f1260480ac22be4877e400e7aa530ac4d9b4f
3a7ac8da5ab15fc04eca999b1f636b51
226a78a8ba256778fe3a616f824f9ac917f9e611
5df53ae7ec5f3c5ff3e6960248b4220c
70920abf4f72bd40f2753ee6db26ac805ac9e792
8f78226835b28bf78f2765b8016a59e63ba90e24
b97049522d3ec6fc92924924d65f35a1
bf619885f2b5d721e2cabfdb7b7e6107b92c7c5b
a73313c32bfb30195f97f1e71a35b7d5
63592e3803c5ec26b7fc2c5df729258d
1b4ff5cd483650d9666c7c81d2fca129659f8983
d7dce1dc2a83faff6b399afed7612a25
b743e5cb2c2ce7b66521657ab3a77595ccf53550
5b9c9db7668d9d30daced6261abe8e66165f697e
5487fc825e5f96d03d8b934d519f8130e2fb0c11
ec220384b76e944ff97f3e5b9d10285f
38f737fbb9f28bc0c65994317c387b57be08a3f2
2c34674ff819ae775acbecd50bdf3ea3
398ba1eea2f9b4a46ecc1350ebf5c2b6b425d0b3
ee7df206dcba369624487e7564838eb7
8d80f392b52c5c7d02b90f96a37699e7de6b81ee
f32bfbb1ce31eebab2620e91239df365
9dd99ae1d6559db6567ecfb1cfe4d88f3d2bcc55
cfd97f2b9666fd1c3a5a4b9c9843ca5f
127ee16218c71c5dd253da56e43c0b7af1e37808
ce1b065bcc1ff5d80f1dd13120e7245627c6bb65
12c4a798a94c3101613a69b9d0212c96
fb951fd4b678b47aa852674e4256e4f0e9b50ec4
c8f0da36d4348807a0e165c0d7e9f007
7bccec8458f573795045b2fe22e5777d3999233b
7bb57f8c39cb693c63dec3f1e0ddff20
7e175cd9be8996f4b50b9fa50823aea8dde6df80
8f8cf0d65acc99529225a39481f227ba
252b6d4be600c67559323b5deefca1595bbf9004
36965c13f42656dd753b1e945f77332d
1706c5e3a991f083970f63ee9b4df8c6e35efc60
6384ba1f1b64d2673a862ded8e2aaf48
56257d710b1f068518620a918426bf8b75a1b807
f72ff715c90f3198367888582950c602
cdd2d3e021c598165ff88502d2415c4030dcddc8
ac30e080f00c9fdc5f82e6c4f56a73e3
f123d95695acdb93982cdc2f9ee7c2a5a5c6614b
c63a1c50610d60ec70e72815103592eb
d88c9668a23a47723327151e858c81656e9364e7
8e92de840be9d9ebde3f389eaed1f138
c37fa23aa6639e89d868d18ee24431ac18c68dae
22d06bdc8d67b4f732186afbc0b00f87
1444728af20771cf9267b6173a7c6bddf2e13f1e
afe324191e790ac28f2050094c47cba7
01d68e35876a8d215bf6e643f82a42d99188ecc9
4c105ce90baaebde379ca02ac15e13b3
5aa5f0381935fce39cd835b19a0cb4603db80c16
b83210a4a592564298ea4d92fc1f518f
e0027ae1a753ee22228cf0f19d7b2a831d7f3506
90e60062b12773929f573c2338a98994
e6f8a005ed547695b070bd38e9207a49a33b74c5
6182ef38423e5e8bed9026eff1ea1117
372bb8ee271f1f48706f538174ec3108
11ed256b38d49ea4a109a7e7bea8ace82dde0fdc
424e20a412a43f8bc16bd10fca3befe6
8ff35a242d2b596b5f80d673b54b5f2544b7e139
4f1e1d4a7577309290f57784e057252b
fb3fe3775137b66d8f6163ec35a51bc255ab61ee
d298fa94fb8ddf8f782af738daea1d4c346156b1
9f47f119f039853113523912619f262f
63ce525e19388dbe1eea68b11631baf424610f0d
39026c98e49a502f4522a986ec4cb19c
0ec4f57d76b521abc86ebb9b3a41bd551ca69463
0dff7cd788764632fe3f4d9bdb6399cb27c36fa4
72db2e240bf83016f424be53d1c4bee8
b88499a0631212e39f922224b04cf9f9e3e2d494
aaa338141198a2541f08d4b3f9b65d80
c640208c1b727330674925bd0fb094450069ac77
c65afabf4877eb49b761a889621ca723
99c4752ee5776ff24f47924209721f8a97288b80
4d90f751035db282ce40a4534414363a
7cc4ef1ede1c5f19689d7ec7774d02cde953fa41
b5bbd4facb86d0f8243b45311b2cb5ac
1bc1c06388db4674f653d44471b3960e6f8e8fc4
084887317b7cf0eda175bc943b8c574c
d2e6d6709d683041999e140f8623cf57620e20c0
f607be0f851d547b139c28a5805d1942
8093478bba91d279f28f7b2fe8b9c7a4e7631dac
286d1471cd586fea6d9c0a0a1800c2b3
7606a343be54fbc7e5a8c216e63092b2
c494364d154e1898cec2b2c4f388017b6e7f2482
c78de6fdcbaa6189155b8dce87bb4d52
b3cf68330134673a2bccef4a524f51fce923cea9
72ce87bd27fe3c0100b663080cedbd44
6b22b2ee48317be867b75c00533f1c5ed1fdd55d
e74a462546a9d6297e0c8818a42ee44b
301335f988864621b1913fb142cc7c53b4a1ccbb
4750f8d664a1e198ceb78da21f4a20d6
5858778672ac813069575650fec3fa678919cb46
1d5c7b908138ab380ba9505f145151b3
938de4b8843ac982d33a1bef4f6a244a7c1a8e6f
a67729f089332af51a161e5e68d51c31
df0aef9cff1cb56133977aa355eca54aa9b26ca7
b8dea69d2938441c3eb07d9ae972065b
6b19e1b152f53ba1523288d60e6f68a0324388ec
f745a713b4df682b53881d6c3c4ecf6c
58afb85759d47da8a38b83bee1e13f00
2d3cc4c690ee79ec27af903e8612c7607ea8cd21
443dbfa8980717708ca785e2d9fab692
b7e347c3158331dac4d4d9791f1f54af1f48f47d
9be8a5b1e2bd736e14f2403fc8b91f05
296928150b5b07d57b3f75522b267318
6bccf2efdc308c6311dbac0e3c09a38c8f56fbb6
f326bdb746c2538ca3f35506e12f9809
81de06d7e804b4f1309a394ae45a2bee445b87fb
0843391f8a4239ac777648c65f4e4f2c
96ee77964e81fed289307b3307bf70f301ed0bdf
77411dbd93e07e1ac5afe75c13f4c911
7307d2e4aab16657e7d0c00bfa931cf0cf876a66
68e805cbd0dcfabec3c3b2227fd6bab7
7cf4f8a22cddf084d3eb6f732eeb73e73d4dd928
ecfd7d9df463e5f06c2fe9c53c9ab631
4136919a74b09e52a22c7a733f0711663283719e
10b1cb81e9c4a7d80308ec7aa6be38a8
6eac96d0fe298330e7826f255e1e4e54726c3c5d
0ec4a9fc6d2c3a22aec4399aafdfdacd
3351cf11ae0e180e1a8d4ea9d54c6f11dd2f9731
8a3061eaebd3a52af9678a01395c763c
8277a79bfb75f57bd93b1bf52399429190ef2b0f
24dce7317e6b8925893778c3a279ffd3
2e4abaa7700f9dfef57b8a1afe48b29034acbccf
da2e1358e94c21ed92defd8d97e276a8
53d7bd708140b0c6defe330e9908b4b02354fe03
41ffce568127754b599722951b6e34a5
4e7d1fa81bd6f7c9d7c464f8233172bdacfc66d4
a8ee286533bc3c0bd585a046e5746350
2e41e4df7794a6dbfb1e1b6820e32f123eff8b35
bcd77841b91845b936a70a26b60b62c7
2b6f39d570d541d232412b3a374d297176b75d60
33fc8c5322b43d26edf2efcb33069189
724d2a40ddf440b8d1af9625b29101abb5d84798
cf70c1e6a2b55a34a793de1526b4e837
fbe75d2a3bfd673ded739cd9a37e428344318ab3
3f44a8feeaf5bebdb5421e767c7c07b0
e1bf6a3d45a1d88531aff465be9290f6a631fc89
b5b7cc4dbba0cc0b171df97065aeabdf
3bb051351601c0ca96a4dd9159aed98bb7c97248
c2cb9f79a4741193c36a142b1f7f13d4
c8160ee785bef21f385a91190c77a37540337767
80c2e615a5eda343e2887e41531c677f
ed96bc158b52abba4bc7f22a05e0e34b2723ae22
81de12c4f62024c4508a7103f9cd9037
bacd62ec414d2894edfda23eab1968142c20d513
6805d9683eee0b8e6880e18bd3d3e38f
3e87feeb497ec8ec829dbc566979d33020c92714
3cf7491f31314d60ce72162bd80ff53a
1c9776e674beda277f3bd26b1f0af2324af3fae6
1f621e6dc42fde1bb59321cff936918f
8204c16ac172fd11387395ae140f64542a42c9c4
e40eef7ae3ef6055e45b16592e43a5f9
b79571848850dca03d5e3859ccede6e1fbf55f3e
fdec4d1805171310c12cf0de4ab38539
e9253930240e37160ed86824e01b7655
570f76260c7da8625d835d6793839cdb01b9876f
f7971a08ece2ee2e06b7eb9c90836e26
6fa116617ee3ec4b8d44350b6c607152a11e07dc
7a82b9ebe80b5dbbbb7dc8f767966625
349b7fb21651ac7dd270b9421f8d6ff6ab5a05d4
ddd55310c79df8772944d672570948f5
70f1767acbea1f92c2992bf8e431932df4f1db9b
375c24ed97187cb964362875b9f1f65d
fcd62ba32e4bf96d63b9e6fd72d7eb170fa4449b
f7ec90c42c0c13481a3d432f30f8de36
8209316cd514e43ac1700b9def6a4e0e
d45b9c9277ca6cf8a8c6a9b6959249e1d5126edb
351d8237dcb9688820922460560ecfe4
62c208600637dba00b406bfcab7587247e2ffb7a
8f4928858dd765d7f32bd40682685b72
ac3c49013944ab9c75fe1f052487f867bfed05a6
52d288f46765c0535479100a7f9850aef0d778b1
b82e63bd013e3cf44025b19019da628a
24de80d0f60be4b70c0c203c41f40fc8c28307b5
8ec64ca2b3176a22a6c1d55ace4d8033
208d1593d50940db3b72dacf302dd84aa58e5ab9
573f762be4601dfe3200f267726ed05c
f1db42cbbd6bbe2503fda012da7c6d5af3ec8251
bb23bb8959dcc4d9959131c45b5b0122
2a5b8e9605d17a3233ff8560e70ae164
2b3ea6e305ac37281b47d4aec7c1bc26854f213b
ced69128151e78473b1e40ef63b313c8
cde33a8d0153f052a9c7db2ed267f1f90a8bcea7
8f900e36193a8287d3c00bfec5e8124e
921c9bb2c4dcfd37aa06725bc3339e6050307022
33c708120ca5cf695a7f5afeb636300c
ab8a14b93dfb46aa76a1f16a68a52ba86fc3684e
d0cd719d909bc70d2d5fd318c6bf625b
23821f5fcb3f398015b7b06a51736d3e57c95f27
eb7e3bac020b33145b2e5e1a9e030666
660c6a31a87a853c9af6861e15610b7b9b50de97
6103c30580897784e0c4b37c0972c7eb
f7dee398e10e5300912d16a5a0941d77d84744a4
b969a108fca616d53761455ed1fd515a
51770bb7f1aaa35a2de4390059cf1c330dc07a27
8d5269fda08a034615aea691b164580b
c67d038c4e91a405cb513e460fd661ef
4f478efe980fc6fdcde20cfbf9e41205a0bd2edf
b12507a2a8154bdbdfee2a357a60e024
c083cfc7d93486c242bc9f00ef7f6312508195c5
cb5885e91f6bb396f3bc6c078f59c288
7da05a780a7e1464e83701491ea50bbe89b46d63
968880a3b7fbaceced8fcf7aebd08637
c7f530b554183c2a43726a92637dbceca4ecaf00
338963db1b42431d8ee890aa6fe50713
6eaba720c5d0c9afa97ffbf1db005575
cb30e8a4a85f314512f42a9c7ac8b06b8b2b4a9e
a838894fcb20c0f130b768576ec16b0a
187b6e22ccf3bd6a14feaae613f03ec5cd000647
2377dfd3ab39d1d51c57a7613bd4feda
31b5d85909acda09b812b0761fb4403bef3fbf86
312919f32e02acc787be716bb5e9fee0
e1ac3a46f87f46da4bb9bb79df5fffc9a9950375
886cea20c76487d7446f3265e94ee3b6
91df7e223454d3a57255d727008c301cf19e7856
ee099825fbcbb3715fe14642e0caed4c
4f733e0abd8b7e976ee5faef7999c60d44d4d792
511b6ae9b587ee8dd907b4fd5a9ca360
dc452c85508e07d4239f6eed533b845a
e775d7f10faf673130351e2f2bef0b0650245027
19aa7b8d6310425ab6fca9083dc4b027
65b2930cd7cedcbb60e99ba15e514d719a5f17d8
839d653c6a7b4acf9e078b7ba0fd54a8
96eaae1bfebf338702c4ae427c9525c3279eac37
19d0d9195808cb773da9b167fc2d2cbb
0a3c1b96bf9d15b94ce5f33704038d44a49a3c78
405a008a8bf42385b05588323517cab7
d9b71213c0e750ff53acfced7458c1a4292a7720
b98daa7567bfe8ae021f91a1d4c2e698
c6e2297cda3e1e2b4489e9fc36207a41
d1e31f1a0c09437b8898220f654666933a3a93fa
71f44616fbfd7ca07e1e736881c76edb
643c59c8d1fd9da0b1ef337be8089e4d94bc8562
8dc9373a633b17dc04ac3bb34fbb39e6
35b389323ca75be0a8fe7cf9c86810306c819d5b
9db7c645feb5839cd8224786aa9dd766
8ac5fd6f57f1b3197c02f3a28d1c477f7655da8c
92c5f2613f3d3f75c04bd808a1663ed6
6455fced075f35fabb5fc58f26d48c869b52ee92
435621e27d742504094c7d056e967d242050a312
9fa73ca66accbf16f0fbe82b721b5b92
aa32fb3ebd74689e9dffe5f551b6018802543680
d39e602845e945867ba3136978de7a2d
c3508eab28ba3577e454f342738bd5c393db9a3c
b50b98728c40cbb4f836fc942db8760d
02596fd70944874c5fd3ef39bc37e768ef2bf393
b091f2b325f44b77560656882ef404a5bcae6600
02552e05617b36c48afce56ce3da83ac
209f2d914f6acee59fc932102513c0bb24e6e47c
a27fa0f65c0b8df2d835367bec1cfcce
b6c6a2cb55e88c54439d5cef341e4829679d89fb
aae18c2f71f881a9d6b900444ca01009
3dcc1c0cbf1411d8c16655cfafb17812929f5e7f
6b24abdd7064c50549925f4a06985cfb63b1841c
bde4872b7e07f96111eaf8d1a7464de4
a1246d8d16271801381c178fb50ccd4697e4a63c
bb1b756b03fded58023dcddc3bc928c6
7523525a05ae4d07bdddd1eea0c66a833cd9ecf5
a94d9e31413c93859eefa10b439f893b
0fdf67850de12e954a88d1e5d8bd0ef6
2fd35d110f1a15a3e0830999590e079fbacebf5e
60fa456bca8e8d9f81432eebe860839a
d6f1468f74554f2d6f125ced94134cf1f09cd938
525ee6cb588a88d69dc4e7583b07bd73
33c6d93674a486d368d6f046d3d17ba888285597
040db4f0bfebfe7cc7a437394aabb2e21360a972
6430b87f934cfc68c1d18f883e8bdb8c
7b86ccec0471818dd4fa906336982b264cd39592
78a45fb3cccb48dc0c6b14ebcfa9a332
c4eeefa45d39dd4f51ff76582415661df8278e90
6285c140131d0b3813bb1cc0ee9816ec
9ecab89a30eaa7a76ee6ee02f1f29b392d55eacc
b922824cc3bcaf3a3ec8dc54247e2a09
c57eab85c21fe2fba0456f056f79d3ab
fcbf48a884df7d2cb4eb8922f5eb46325db2f90d
06391f256a54b33c5cb0af28fa0cd6b6
2926d6c0e5463808d97ac126c59bd19e820f4e6d
974ee12230d80aaa84dc9333463b9507
1b3b71a8c10c4240e42cc215c03487f2da458cbb
ecf8f6931d4ceed2e2b52a9f7523cf29
54c243fd57cf352a42f5bf24622dd6fc76817eb0
a10122dacd68a3458f9aed7fe4f39d2a2ede70ae
a9c3048e058d5fd937c2520c7e39ba2f
5a15ce77e5f4e1b84c87cdf93966607ddf24b2cc
fc5453bda832a05cc9938bf63655a82d
1c2ea4117958f9c1cfc139319a50185923293778
4d19e8b5467ba27629ae6ce686bcde26
5e5eaa8673268f96cc989166d917fef7
1363a06a5ab8ca34337edbc428497e79fc5770ca
e81a705e224731debca177e242f98382
bf0c2a9475979ac5d4f5202463eea5d36e23a3c4
d9645fb22d72fe4a9e9934d41c4f9649
af9d861a16acc1d1eb1e99cf91bc29c47ced6792
aca3a3c4e1abb251335923007e6af346
539c85e106b36626530e1ec58825d84886fea966
5bfb155edfa3b5c3c5e31f87c7515cc7
7d45582fd947f65bf314c5a22b05ed072efa69c5
661310980ca396098b3184aa735e896109fd56ca
931faa3cdd1b82db6d6f951f82546e3c
54a549a5ef4eb7c8efb15245c25db83380ea7b37
60cdf66fdd9ab44b39fb6d075ca915632a8f0b74
8e19a9396a5d71545dbbe3326e669ac2
3a321be1052bb889ccb88233de027ef07d3cb89d
89fbd496c17180a530dd44ff43d48d1c
54cd082a243cc09281ddf783d366de64
c22220e8deccc0a25940b014f0a6c65b0ed028ec
20472247059cdc6fa598765f7d55f135
d1b1f3435212de193d34df5c98907ce0b7c7045c
23bba614857bf3bf87985004e5a8b87b
126c352d3c66fc9a8789455b59de7fd0a73bfafd
70fc9e08ca51668eea1a388e6b59491a718672cb
8bcb880197ca9e446316bcf36aa8e033
5020954510c406c72664d3d44a11087106de8be1
03f749a0dc0eb6ecfe3d917a4fdc44e9
09e83ab5a63cf4493632f3981dc43807
193d1f7b09458be5f9861eec45830cf0a929b127
b007314d8f8a405991e30e10ce341d2d
59a8595efe94945ced045284da3247ea9669367f
5ecbc00a7ae3e2bce6be65155c162b46
884cf18e0bc8291df886e150e70d2608174a41e3
c5f419a0c750d6893a5440f866592c03
857d44954d4d06b331326761b3d23c720b647925
e11c65c4f0db09932a5204cb3416cb1d
a8203cf02c9061268119db6eb95416de12bcf7f6
570a43e017dd016428dd36104ae18d65377c8bbd
3417a82bc7a9b96ae654fc2ac02bfbe1
4131f215e1dde207c29b6f8a4fc5a2a2a3c86748
c1c77faa5b2dabdd7e261fb7b99f1c96
e665aca6ca9f6a51fd70806b1b18d6f10b351f04
d891bdebf48bb05345490a287a53174c
818e8aee7c90fbdd1c1e0d3d61cf6ed986579e8d
7cd93ded30bd5808f21a39ff53a57282
b2379252009fe155c0c9749168ba147ac3b609d9
e61c97b7e8d1c6986fd47498735606bd
64ab85a34b6bc70b6dd3596864e4e55e60c3370d
fdcf6b2689bfe9ff3faa2b7407d76e44
e8c8ea56ac313f4d30c39b69249953f9ecd65eec
48e9eaaa667d88fd0c4968999ff88c88
9ff05a7ba106fb155fcd990a461d80b8
57af3b554d3f069cc25a40cfc82d28ba
6ca5d6e50a174ae338a65b6ed796acb81e555dd5
b354924deda9b883f10f64c587674dd9
9cc8176ca64a8c7188f0be62f7c51e645ec4cd57
8f3afd781367fd86653556dded0919a1
e54bd8f2a9e552b4c3a967782c21aaf1
a23421fc793662f567e0545a6a66f59a
a0afb87d2a80814a2c6d2733e4132cf22aebd805
2a82aecdb89bfd05a7e5dfa0ceeffca1
1b12ab5ecbf00df5fdf2dcab2e5310cbb5c6bc2f
ac3403326d028ddffb97787a877370f3
393a6edf40a51a918eabb80996ca957364daaf8c
e05533d2673a64f8e091e16b63631a91
ca5a3cb606615a7490675c21fe2c77e30f82d75a
db685832a5910d4ca5093495da2d3e166d4a4d84
bcb404d63743b8396d97c4533f5b9cd9
1dc7454e8bc6a95c7b4acfc0d3d8dc00e9c51860
3e56998b91320dcbb1d913a90662822a
bb85a6226ab5a1be1a44fa8a607b6f471a9d57de
eeaef506cc96671f9ee7edcacc51f1a2
c8283640a2036715a1e985a8b7c02944
07eaa986030fc4a79c0e31776b043c95c284950c
16492bf57cdc90885496fd104b5f239a
43c267bb309a090ce06a2958614255a844f556dd
5bd139aaa55ff4ec55cea143a5342f8d
b320494a8e4817cc80a6c12bf1f113043e0efe2f
a5213dbee75136fdf072598f51891895b00c342f
f8cb11df66001d61149145655de1dd404f8c297f
39884ef69a3842ae07342802a45cda8e
4287942af05356637d150f27d27853c8ee1188cd
c9cfe8c73014dc11fab730e93d6eaac5
cb286472faf6fe696f1ac9b003148318
15c2c1b3f40bef7ba6d18890f13a18328c9abfc9
f8e19e5d919aa454350297de681fa303
b32ebd1c33332700eebf693cda69e2d999fc73ed
b68e8fb91353f4d503ca5484eb31d3e3
5fbf02b4016b193972bd7b1c713d8871a4f6b478
14c83ee48fe155a89b8495f016d03ac7
dddb75ec495bd82cedda2239afd1849afe802d0d
37fe4c8623b42eb4c55a0e7b5cb021fa67b232aa
30e59003bc794e3b75272474f8b96149
a687f07d1be3e1cf5c88e9b5f87ae727f5a3471e
fe95962036e36ca3ec3b1210cb030d9d
c21bf1f8a5d1dff58f4015b5ae8b3fb6111737c6
7fc453436668a3fb89a39a60b2918454
598aa49318bca4c47b8749b8170ef9eecbac0cb4
64a606023b9092205f75e18613d64894
790ca0fcff334a71ff12f1d60f593fcecd145bfe
bd58cbbc8c75a0a246b3841864c526eebfe03d55
877d6de64afaec67b05b67ff50224aa8
edd49f58df1e463a4d7a6074eb90fd4e6b0d84ca
0604d800a769c99943d63b23bd5ef03b
c656aba966b711689bba7e7464f946bf
5564aa086effcb1c593f089bb7d8d940dabc7097
205903a80c920d935b261e2b162194ec
697a5818f5fa5b6377221f1a3051d629477d9f48
32efcca7eeb8f687be1e14562e43b220
85a3409674effac75b8286ccd5a0aa79245f3b27
dedae96de01dc17e30d74629708d2659
9a247afa8bd253b7a9d13d1e8e43531dfb39b7ab
4c857f4bb889ec5c67b40205a559a143
e672f9817a06955612db3f0ec952dd4fd4d1b8e0
49e20e63c2a000c4978489189d4830b8
6116f222b1d493954e04d12ae9dfdd92a5a66280
f83f0077929c0cf4e6b7967b4f3f87f5
c3724f2283f33ef696d27e9dafa1ce06b7b7936a
8729ceff5aa3794ff81baa057a78ef9c
e286628e27fbb3b371ec55cffe600d37b4e04d21
7d8688bbf8d4505ca6ee309aaf6d54d4
886fce0f7d1f865917a2d30fe62fc7d401797c9f
b56f776a3d8e1a557fde9d57cbbae256
c17d40eb142f25ba46def399d10f211b82b1fbb9
43305b2de185e559a6b5e51f252f1582
5669c3ea2bdc366a8b605a2c2b858440576ec4c6
d25026694ee7dbcbcb5fcda2b303acd0
6c9abcc26507d4fde61a318ab3ae0d1097a2399c
25da557d7347bd6d1b2c9bc7208dbf1ee7fa7447
926172ccc2ecb74b90dd989d87f3e821
ed3fe736828a8881bf2323a7f5d48d3c9ebab3b8
14d62af1c2a130517ef3a1e090ea4ffe
22377538874f5b7deff94e446034f035
21fbe86963d5eeb4d3d556c83b063ef0600fd38c
fd0b506eb48e0a9fd63d78e93ea70ee9
fed8c25bfb884ca1c565c2daf77c8de2968d36bc
ccbf9b723885b3769376bee31d29e462
0e1e422ddbf017c19444a9ec368ac5298a6500b4
212044a1c7fd9ff896a0857eb29f959f
46bf1b3cb63479f8b013a063f13f2bf7d94c697b
1c1fa35883e394a8ba32558b556f4f45
aea0ca8a0e1c2469976dd6158c93d25f8d94ea5b
e15750b6737327c7b472f147c6a44a1a
e053e397837c453e0b1510da1a876a706a41157e
e2011fb8b4b5347a559dc0c3996feedf
4c3a335e72e60e2dd39cb4c85773340590b3af3a
387224332a810eecdb4dbc84d5c7a779
4d0bef0c68f58067efa0537f3e7d485f29f6b72a
dbd85098d4360d2818e475c7c3136c80
8663384315ee6bf5bdd495d6325857c82f73542b
43f47430cb5e95ea85c312b9c6268b6a
006d0b81a944ab28c9ca09ef3a8852ae2762e9e6
a29b41e6512a8342bfe4f83dd84f0373
22682915df2ef22ae970eee243ba5b613ecef93a
5e646627947cd7c2c1bb73f66ed567b5c38a0779
32976f9080d795c2bdf19014ca709f7a
9872fa1855844dbf71029ebbc3476b6b
1693b3570a5638cc835b1701f829312793e7d65f
813b3e420281b150a74da2a8da58863e
a80356afd2353fa84a627fc6ab1f80f178fb4d82
db12b76dd005e190d6eea6689b6cd6cb
6f8342b5f98b2ce4612284e68ba472e688e19282
2b3924af79ac0dca41f2b8fe14b78e77
4367e2b929553a2eb1cc2e90047904d9423403e4
fedcded988eb25df8c3ba67c3ea12921
c084f7a8cbe924ee9a5b4c108a8c3236d7f54d63
c465bf98269fea442507eb7ffcdc8fdc
fea7ab7263c5e18d0c9b0e9fdcf3cc5cfecb81a6
ecc6be6ca1063bedb7aff9e6623f0209
9d4567f763c5b1942de2e643bf9ddbf0d8bb3ecd
84d5ed90aa0b793afa13dcf4e141a74eb6286931
e862ccd2fc908c46ac9c159abd68a2ee
17230252037684cec7bc1b6ca23cd5e9b868e7af
fd0e6997805dc2c6cb721d6db522a360
d1c7538681645d9cbac84e1b87580c46f4bd05d5
43b95e6fd054e63b7a4979bfa0a4dc39
69f63bb85dc64368914951238fe7756ca93584f4
89e19fc6a15dd7320bd93c9bc7ef46ec
f1fb22b9290a407010e4a4d4cd2bb6fd9059ef12
6d3b848b5bc4ca83639f4f7ed3e4eb75efb086f2
14554f6119ebd375feb4ef0089a670a0
ceff4a47b314db46426e71bc4f6384df
3aa6f9e8c2baeb9e8c9ac423fc43dcd668d53cb9
009e937813e1f28c009fe946fa159f7f
37bccf48fec392ca17aee29c2a4ec7ebed01c9d3
01208ea766986f64ffbea75ae17de1d1
1cfa00625aa1a84363c295f10733f8c169312322
43cac5a6bc531be9826ecdacf044b765
60d7bc08efcdd1c015efaf95b02f5c230ba7c994
22c9de590562f6bdbfa248dfbba20f7c
ee04a0a19bcdf50373b29a30fda09c2633504ef3
8ecc484257ae8468af43388093dfa175
081f49863a824d991140d3da72390c2014a8923f
b90503551fda8f1ab95a76e7f8149605
a67fea524f10e006df58cfa208fb52b65d113717
18fd3ec88030c5ffb37aeb54f1bd1322
1a7dcaea093f803bd55575cc997e1b5a36ab873d
4c6b2bfeb10e1a26b94187e14d78fe3c
80ba5462d48826b1bdc025f068369bde
3557e81ab77e6dab38a5e6b61efcf82d74aa0212
7dd5dec0d4c609cd6f2f31415bdab5b0
161c64965d28069b0c13bf0cc4c5fa86
aef263b81911242398880a753db31d5b
2b87f151d81277bdd340e675662f3c0c3d4c67e2
2c7cdfda9df2afaa354bfbc6df896c56
c16f6aa656a9c446c24bbd92bda5f4631e961688
3ea11816df22748f89fb06651ed22d86
764de6ce910cf81789be49bf987a165987f4df11
09796083d7e09c46854aa898897ac9ff
d1a891ec68ae09c5eebd0cfa0aaee8fd4917972b
0c0b3c171453307cf971c44b1d344a21
370f934b12fb7eb46d9446190dae8fb3ffefa812
afc81fa5faabfc3e1af52270179e87d7
1447127a94c08fa49a8645b40a594494
bf239d408a02e25c47e460820fa45c5320d9dea4
eb64dd308fb6f52256cfcb7782bbe288
16ef91a12eb92a08b607fe2c0db6250bfed7d636
9b1f15a8d283ecea3c8c05cacd7504a5
053ce922b80eb656bdc268745f93c614c798249d
ff9755ff6f2ff1d937d3008cd0e550aa
c3821758f05d34e902d1f273b0ebd1196928aa10
34fd4426009af0ec2cdc449898b619ea
8cb1371b941da0380ae20b72d0b3ed8af44645ab
72153d6d33f2e56ca06e9d7d5aab9e32
84cabf3807d56e395dbcd0705a1a087158bfe13e
16c3236702a0d754bd455a1f13ee3b7a
7170fe14ab5242b23704256e7fe081ba3b339f48
1d348b23ec4e3356fc98cae5ce2932fc
bcbdab8e2ec0408e7e8aca7b8f518270
b02036304c3b92c492ea30d3c5565eda6d3a6333
a2eba293d9594e6e8bdd0dc3f514594f
e830f46662c88392f9854441b1bec23ab2e89d11
cc0dc17a6b2af49b02304715cefff6eb
7651 F20101221_AAAEAA roy_v_Page_14thm.jpg
9761d2e535f47c19144038213472ffba
8eace225d306c700811dce13eea4aefda1f27d4e
300ddb7a2dc807a3ae96b97dc2f34911
5b81fd07d77ed1c76f1080e9030f32e5743cb196
835187ab785f43c463b3d70ee3dc7943
4d13d6402da1cf9932bfd061f5588bddcfc4514a
35463 F20101221_AAAEAB roy_v_Page_15.QC.jpg
f78967247bd64b02bcc059421e1593fa
0e4f57c1b4c3d90e68452d005c5c78c7382b5fc9
9be9a4f94dbf2e868083961a5320f914
6f39804986d7b2dc0d3031ff5786d120aa92f8ea
34354 F20101221_AAAEAC roy_v_Page_16.QC.jpg
6543cda96d7ce9964d842211e9828d09
44a5a50da123d3fb4dd2a91ffcc8aa9e
5133184fffb1c41c86136e257945b5008e22b80a
a1bbdb39e37d4b5e3c3035e25aa281b5
6c0433bf6d4a977c727a5a44a75c6fe518d23e7f
85b09989fedebdc58caba6b6258bd08e
722811c3b1a0059d98a1df807e0e979e
fa0704a09c47358f9c4fe7e19a916c4ffc4bb29b
51c42117c5de714be3eb43c48b78940f
919 F20101221_AAAEAE roy_v_Page_17thm.jpg
39b57d934db3ab0712bbfe2fc032520b8bcde4b9
dec53ec4d905a33c9d53d01ed9948696
dcfafbd232665344f744f98ddaa7fb340655947f
0027b938008babbd49e5386df56b1abee2eb7743
21233 F20101221_AAAEAF roy_v_Page_18.QC.jpg
77da661ba46e2b0eea2f735bfb7dd9140a44cf43
e1ed5a905c13f7333c01db7b850a6dc7
df21bcf6bdc5301894becf99ea9101e940f529c6
6d0bf4e691456051a7821df138202eed
8e91d97406e281d077c5bf2071944c769a597f52
5493 F20101221_AAAEAG roy_v_Page_18thm.jpg
11dd85506c648935a565d4daa27ba250
313a4a0943aaf9c542273ff8c7b47584e6362160
80dbf3ba42192f07bdb9c42aaed8c862
af8171b1b5c16dbbed24526bed10ebf12b333eb7
6160d7b65ec2c0aa0bf96250725ff916
e719066901a5453425d5ed9da76d3095eec1935d
09fbc7a9445faf5e121a959735071d35
073e7061f77780dc7a4ce92b8a14b9c88e867e47
28103 F20101221_AAAEAH roy_v_Page_19.QC.jpg
d0d601e27c1d128f9ba94ed78f95b5c2
96babb032af9339a5544568d96d27d24a9d5c822
23cf69989ed230fcc8b8fe46b3d61b74
735b28491337802008dd9c4cc1168b9d7f047407
9eb549577f50a85ea0fe9e27e061dd4a
cf10fa68c42e1417580a3d21407b62ae27225f3f
799ff6d2f33d018be64de2ff8c4028c4
252c2d705c6e3d7c610a1298f681618a1c3682fc
27427 F20101221_AAAEAI roy_v_Page_20.QC.jpg
224e545a24d9e81a3677a16e6b76cfe6
949e6dba906e387bdd85f2be7004475dcc5a5e2e
b67fc7da170df44fb3d13cda87129229
abb18a8b9fb5f553eedf927948e0e1193f4c1e12
31ca251c695d8a293cf27724c3d49560
9e65f657dfb016dba3ba16b04a24deb6dd0a39ba
7070 F20101221_AAAEAJ roy_v_Page_20thm.jpg
6cbe319f5028fa7d78a587ff8d3ce05f
62cd8f2685943b2484fe4274252779a8cb6ce48e
118f25a924d836769da82e221b054c4d
200ca94e796a7f0e9d7aab47fa5978502fbd04ec
2b860cc06bb861956101f9a0bda30862
2ba462e706bfa08333b0870231612dbfe2940b5c
32049 F20101221_AAAEAK roy_v_Page_21.QC.jpg
81721662ee5621aa0df12c628144d4ca
b8e3db55ccf849dd940df059b7c080602f5ce278
3dd877432a7997e97869eb215f68536041f41ffe
5ee5ec117475843cd51d44f69e3690e9
3b9974854539dd0a13a86a4f6f8aab7a9a379334
8495 F20101221_AAAEAL roy_v_Page_22thm.jpg
79f5ef3070ea78df4b2e601165dd5c4f
885868c54f25fac73034a8c893a102d3284a1092
fcefd5096da0d0bae7ea4c6aa3511823
4f601d75323d5c8f9f007b32480ab1107f48b62e
4bb32cf1f35576b8d93ec402d11c9d45
23186 F20101221_AAAEBA roy_v_Page_31.QC.jpg
fb1fe74e50e25178e8094d076d06e183
4aac7b86945ff1c5c79ffd692cd97b7490edda6c
19620 F20101221_AAAEAM roy_v_Page_23.QC.jpg
687296b32bdff944e27fce6810657435
d2165e0d242b4f1434075a2bcf1d1182e3cc0410
df27905884d53e2595e28ce96082a154
86cc7ca30f86d33994d69dd831fc0df5e6986efc
82c990f863fc9924ef43185cd070209c
15f851813822e363437e19e2588756883736ae5a
4678 F20101221_AAAEAN roy_v_Page_23thm.jpg
aaa7718679a2d451d22b8580d6e645af
9f68186c22c0e35ccd9f6533d0020f8863461c10
bb3bb67713263481fb82d9592004d56e
6d47a85d4a4b07154715638bbd74485610289096
fb6e1c07dc0780761dfa06038dcdbbde
6426 F20101221_AAAEBB roy_v_Page_31thm.jpg
7c74aa06eabb9d5ebe3524c36bf1c9ba94374f15
29963 F20101221_AAAEAO roy_v_Page_24.QC.jpg
964356d84349422f04bb4ce24e0d3fe1
ce092c8a7be2521f924c03cfeca65a50484b3a48
0b92f2d17e0cd6c483b0df1c4d1aa4af
f8a939b0960b0b9341b3e7846ba64cc70d43a500
2044cd67c0123406e2b747444bb6cdfa
a5dc585615f3271f5ce3a46fc7dab38708556b12
6270 F20101221_AAAEBC roy_v_Page_32thm.jpg
2d7f5c44d161d342a087fd77457e91ce
31343 F20101221_AAAEAP roy_v_Page_25.QC.jpg
2a8a7f37a108b553157c53b2a290d84d
e4d5a391d48f62bc8161fffa930d6a3b67450b31
6847ea326e1903e997c16b241f017c2a
e335ef593ce0c76febc83a07aa7e0c05ddb15315
17378 F20101221_AAAEBD roy_v_Page_33.QC.jpg
873d7599a5173a8d7ca6c420bfeb6cab
27e6868b42c01401a87a542d2bd7210fe97d7da8
7558 F20101221_AAAEAQ roy_v_Page_25thm.jpg
eec1eed95389f6c3a119c55bcc9c9a4a
5d36b885017b555c362e51c552953231df6f1d6e
d6b09a727abdf9f0b4fc0c8c613b0653
ce86bd38c060e9bdbb70e6ec9eaeffddc266467f
9bfe23fb978de7d3e720b265ba0a3271
3159b7a15ec7360cc3266d24b66a2a6b95e994d0
4897 F20101221_AAAEBE roy_v_Page_33thm.jpg
41286175ef00c5b15c8df17158ec1c27
728f4b638851a712b2f6a8de854e465f1194b105
32032 F20101221_AAAEAR roy_v_Page_26.QC.jpg
b7e3e5f5477b0dc21da26bc8bc472a4c
ae143097cb608b00a20765ca396350059942ae62
d077f82ecbf6a5b93ce9b46832355e48
5aaa03c46ab800792d639e7dd0154acba73e5ee0
5dac65b67a98e58fa9d1a7304d72403f
011f6296e351385eca376d0259563bb5e82f55b7
5168 F20101221_AAAEBF roy_v_Page_34thm.jpg
3a6724cb8f616d149234f2ac2070a685
46487a032940a3a1118a58e2695bb397ed9351bc
7861 F20101221_AAAEAS roy_v_Page_26thm.jpg
9cda4e691f20153b39e78bcb7f74411a
23606c41eac30ac64e902ec678f17abb
9ca7bc7d33cf4c1df59661558cf1c80bffbd133d
862904fb83bc845b09bd0059a464f1ec
471be599a127895412caea41ccf19a968f1a4d98
32507 F20101221_AAAEBG roy_v_Page_35.QC.jpg
1681fc055ef5bf0a3b0092d06153f32a
46909894fe92c5b4870f62d6b46948ee97e02751
e339ae8ea363a8700207e728d1aeb17e
28154 F20101221_AAAEAT roy_v_Page_27.QC.jpg
d25826f3039c01b0487dc7c96f36db45
6f1d3dac39c8bca4727de2ec037030fcbb519873
30a1776d6bfdd165409e96d919ce66e8
2b4e2e304444ea0161df0082041397b146da63eb
a116b5d6ef022b3cb9f6013ba356db66
db222d3b3ef90872011a6166cab385792ce02d34
8011 F20101221_AAAEBH roy_v_Page_35thm.jpg
c349ec940b56ff437c230f6bddf7da6a
3d3a49e85755a743f299edf1973d32b17767ccfd
fba33857be850e80f7aebc8b8e5575f6
5abd402609a604f60203ba06da914b07b33cfbe7
7002 F20101221_AAAEAU roy_v_Page_27thm.jpg
bc5a7f37bf003e7103343cd36798d5b1
215d0cd11b13344ab291befeb2e4b154a68e4400
e24fc973093993e43ea3d684c3ef1f05
4f963312fdd1c4975f64eacff5869ba2e11d8875
90ff8e3dc8838613e126cc8fa8efea9a
035ebebdc2f87ef1e50bebb0226026c6a79b3730
31050 F20101221_AAAEBI roy_v_Page_36.QC.jpg
f82fd8e7cbb7e593c2d9e18cbb017d1a
0a7069e9bd76e35e4cafda39272e920c667140a9
6a15aaa7a927fa2f6887735f67897267
8664db4237b7fbcb824e47eee1e9aaed4415b8b6
28130 F20101221_AAAEAV roy_v_Page_28.QC.jpg
ba9fa000051f092fc019f72602e389b9
f7557f36d78278b92a808f86e403c4af76290abb
a37d47207f3830c9d5d926eb609f08ff
626d2201380e0dd353893c966273a164a531f48f
7477 F20101221_AAAEBJ roy_v_Page_36thm.jpg
6667bd220dfc8a5e6fcc3c63483bdcf3
295b2080e23d119caac7cc5288263e13c84e7370
9a511c0633147ca34abc882ffe7c2423
8d7816e5b2d87e75eaeaf6c60eed275053d54b69
7144 F20101221_AAAEAW roy_v_Page_28thm.jpg
68d66b3b31861c0c518c9a5f057f1b0f
9d6067707c0bd84e21d421626412a6f9
7db7ce39bec2b1e9a71d29bb89fc6a779f778646
F20101221_AAAEBK roy_v_Page_37thm.jpg
fec54fe32d4b5ea5897db304fe44636c
1848b76c284a202ba0397e35a7a2cd54c0882c2c
5bc439bd130a47aa1df0e2b81811a79d
d13ce47f6a97616a373c86fed5abdba798aacb36
31669 F20101221_AAAEAX roy_v_Page_29.QC.jpg
fe98a80402ab2c286ea679a37315e1f3
6542f820be473dfc043479c36a96809f4cbc8537
ecd550b1c9059691f55bb13e9424b738
05a1f626179dd40e210ae1ef4064a7128ea874d4
6412 F20101221_AAAECA roy_v_Page_45thm.jpg
2b6675c208d71fed6086727078c17bd6
9c5d45f1c07c4a296f8f6ce87cc16e0b9cfb6411
24117 F20101221_AAAEBL roy_v_Page_38.QC.jpg
a6cd37c6f6d0da677490f70df3b08f55
4d3163313299176a67aa8a416913a575f63a7d78
7937 F20101221_AAAEAY roy_v_Page_29thm.jpg
89a6f58ea6aa21102de31166fa9846e0
e6831bc63b8d33203f9371cbcd894eba864c7185
25862 F20101221_AAAECB roy_v_Page_46.QC.jpg
bf4ddd5e38826e6d8188e8ea7e9cc493
0ebc78504d8cf9f7ceb21857d3efbaa0de07e2f3
6499 F20101221_AAAEBM roy_v_Page_38thm.jpg
a3203cf08fd8032285a7742e2f315887
b6389b264f0f9e0cae7db7e9327719ed7713c0f5
6ff513c50c567e7b02dda54e2931eb74
d1435ac22dba56326d47107e3a2f94a464016257
2e4a5cbb0606feb6feb32376a4683420
5df209b74cc49ee7676115d1f980734e619c3a81
29918 F20101221_AAAEBN roy_v_Page_39.QC.jpg
53d30beb5b2f41ab71c0c1aa9b6bca21
30c68a92e97aabcc6afae75df938c7f97148e766
20098 F20101221_AAAEAZ roy_v_Page_30.QC.jpg
d8b47208c5a66ce85663ed1649839025
8cc7c2b7cf5ebe4dbb9aa23fa657acc62e5b4db9
65b2fd053da1041197ca013be1e0ed76
01ac0dfab1882b6b7223507f2c74029a54981efb
6474 F20101221_AAAECC roy_v_Page_46thm.jpg
bc07c127e0ab4881a0d46b8c93051bef
f35d6c66af74e4a99f446b179f98b94fd70d3cc4
7120 F20101221_AAAEBO roy_v_Page_39thm.jpg
a88225d56da84eb2b78cd6a634a90a4e
911e20a3d78ec6566230d3de91662a05302aafba
8908085ed35739d59d10eb920a119bab7d34163f
202cc8c9fcef6a23d3c9579e541b94ed276b46fd
33275 F20101221_AAAECD roy_v_Page_47.QC.jpg
9f6a3f2d3547bf6a9d79859eab9c033a
7f6480c521e0c4ce1fbf7f8a5dfe7a79505e3c51
26561 F20101221_AAAEBP roy_v_Page_40.QC.jpg
98f354f504897e234e356cb3664214d8
b127e2d1d551b20de26ab5fc49636a43027f0a9d
9f5ffe906ae3ce477192472c2f345002
4873ebc98ba70fa6903f107b4a90549af7cd0477
7847 F20101221_AAAECE roy_v_Page_47thm.jpg
f573447850962d8df9cb02bc99899bb2
322f17e763007391bcedbb115815d1004aff9142
6813 F20101221_AAAEBQ roy_v_Page_40thm.jpg
8358136961de3e2652e585324613a986
931e7d9c37c6aa1f4ee255c8738b3bb85c939b25
65368a8ea645acb23851c74ef88869dc
6fcc0d7b38c570f00a42235e8a8a11c3d7c88b37
cace2b6eb3a6a150b9a315e2f309f574
e029aa0be20de0ed49f5782fe28abb96fe0b8813
27087 F20101221_AAAECF roy_v_Page_48.QC.jpg
a42b58a3136dd0f71b0e3085792ec34134609e61
24979 F20101221_AAAEBR roy_v_Page_41.QC.jpg
e33dbf4944eb0300f899f6bfeec0aa27
05ffae42e2f248d83b1b179e141516aa28528af0
807d3da82872a78f48034715b3cc3347a8dd74d5
4c912441c670b56ba1a14332206e36966087fb3c
6915 F20101221_AAAECG roy_v_Page_48thm.jpg
506d1df45e2d551dcb49af2c8d84444a
72ef490b12128ca06fd29b3f6898d737ceb0eee6
22b9e21c8bc23c9f45293956b094a484
502ce46b8de7aa4244ebd083757cde9ec0a7a7c1
6199 F20101221_AAAEBS roy_v_Page_41thm.jpg
f7d34d8602dac134e8a9cb499fbc39f4
eb132bf915c787f235c4cf4aa47c3e59e197f0d5
4871c07bcb9584cfaf474a848baa1c65
d19d7fafb3c72bd2b4e84a3ffff443a658e98b08
00786d4a1ae0a9e899abdc39b5971d37
51587fc37da13eb1a5588b0633dc7f2f6bb5cc3d
33205 F20101221_AAAECH roy_v_Page_49.QC.jpg
d46018a0362a6b104d14dfac87466f7c
f3a92dc57329ae696cf0a54df9670e30d236cf46
9753441aef6634d784f717dfb4c970a6
362b495979c3ab258ba0e7255e78989410ff094c
12703 F20101221_AAAEBT roy_v_Page_42.QC.jpg
74d7147713c0fb530e2607222cdb12a5
7e35847a7d122beb1c5c926b9cef5f4ca3cae9c8
a12aefce94eb80deaf71508ac6211566
153f3db0920c03282decdf54806f07b5fa89fb23
1a901fc7f25cc9d83b0c8edf249117c81e410919
7799 F20101221_AAAECI roy_v_Page_49thm.jpg
1f0f83494bdffbf1a5b2ee404cc667d4
28f3c2a8160d5156371dc878f7f01255
e22432912d07464482a1bd3cd664249702790bff
3335 F20101221_AAAEBU roy_v_Page_42thm.jpg
c0ff1d5e3ebdbea90e2335cc557f6800
28b0bfac1db6e5fac0d3da6d2d363ce504c2d8a8
b4d5a445669b02ed1e0f530689aa10da
e3038e77361223b40201b1579408bce47e960fd3
32547 F20101221_AAAECJ roy_v_Page_50.QC.jpg
e1fac6106bd9884800e01b1a748ee594
2b69f01da36f6bf00af294d3200590767580ebc8
91af8059d928f9a0f64d3a0e73bf403a
33c3d74eeab6df9c90544dcf08a54cd3d81e28d9
28980 F20101221_AAAEBV roy_v_Page_43.QC.jpg
1a137e78af6b8beaf523b78911d76ac1
69758622e73597ccb09e0d00aa916455529938e0
91f2eaab0777cbecc23ec94d31acc0ff
bbee0f3170b5f12ca868d505f68561feee5cb421
14319 F20101221_AAAECK roy_v_Page_51.QC.jpg
f4a012ee8f7e99e3d00165c76b6e1e82
5a41b9ea4e7c6f4eff1dab03d6f38e41c44a2d62
7010 F20101221_AAAEBW roy_v_Page_43thm.jpg
8fd8784354c33235a5d288262c24ab26
11165c6bf721be4f195517de659a7e0b
7a2b9b2cce31d72791146e1b728e7ab5c22b674f
4061 F20101221_AAAECL roy_v_Page_51thm.jpg
16dbcc9d0950795e26b4603054ace05b
fcb65d323d147015fd0d6d07a1521d33c3322d89
8794bdc54ee069064bba17f399a8fecb46d7d2bc
29483 F20101221_AAAEBX roy_v_Page_44.QC.jpg
dcb135e286cc3109e16d264df44b7fc8
9e58b16340a472b1f9128b19363c25abf723591d
129d8d1790a62685f9fc1b62d76480cf
fb9862e978e0cbc9af9bb852106bb13e28f92fee
28647 F20101221_AAAEDA roy_v_Page_59.QC.jpg
a06ac9143c22fdc1e2703ee72376395f
259f24ab74251351b2f858899692fa2703e882ef
24989 F20101221_AAAECM roy_v_Page_52.QC.jpg
9e9268a51483552848818aa948f3286a
cef66ca3b7435c714e19bb67e2291723e687f733
8351eb33da5657ac53b493129973faaf
518f3342481ce6c4224268a5e1ed9e4e26f5321f
7394 F20101221_AAAEBY roy_v_Page_44thm.jpg
c1ae9b410f7d4f8939f336cc828783c1
2c64c41f00465b3d68784f0f0df6d2344f508d93
fca73e0e65d356e5074faf8640b8ee7b
e5e3ee479817733959f2b8507aec32ff6dc2ab66
7074 F20101221_AAAEDB roy_v_Page_59thm.jpg
da80285d612e55736a3e4f28c23ac97d
fbc95976621fb831aa0d908ecd7116b1056b9446
6595 F20101221_AAAECN roy_v_Page_52thm.jpg
2e2d85102795ebb90b71d37d89e32864
3dcbb4180ab7267529135ebe323ec010
4024032084b80c4a7e8e2db5842385d3c82cf09a
24589 F20101221_AAAEBZ roy_v_Page_45.QC.jpg
f06e2dd5d621251684b643e411961595
b7b9508430852262215b020ddf082bd3a67ef509
af29287fc736fee19c8ce05d835125b1
ba7807461f34a981d5c0ffe74b7212680eb7d1a3
6391 F20101221_AAAEDC roy_v_Page_60thm.jpg
3df352611f05bc42c3c2263416b93f70
0715ec2e79d565a1cb5d3c2b948a92dcc1e6a1ed
24956 F20101221_AAAECO roy_v_Page_53.QC.jpg
0c7dceef40bce324e86d4589734e2d11
a80466f433eb4e24df9c0d4e5f707b61f2e97e82
667f5b48ab04bb47885a5c4e918c88b9190be964
f432802c773217123c25c0a265322d55
1d064d7aec667881476b664e31d588e591ff93a2
6067 F20101221_AAAECP roy_v_Page_53thm.jpg
458736897b2e4074b4950382ae53b794
ae206cd1c1f1d6f5ef2493614701420f2f7dafdd
87245dc8ea97d1e7981a2f8b2f6a5eb1
579022bde1b122fce6ff7c99cc7c361137543326
8ed97f971a3174b34df35760225286a7
442419864263ca3c9a24555fdc35f4bc9a0caafc
28449 F20101221_AAAEDD roy_v_Page_61.QC.jpg
72a784d7ff174312ae23812f09f8fc641e9f0573
24608 F20101221_AAAECQ roy_v_Page_54.QC.jpg
af4fda1bc821aea563c25664c39f51ff
19631b02c2ccf60a3160cfa49285bd9881226b48
6eccffbac5709f0ce03a1b5ff52e9544
c3823bb83790d076aa5a6f2ff1001ee7265316cf
fec04435eaa1c2fcb8a5e8acb6c15d57
574731715c5be44de1835a82a31d26aac6e1a7d0
6963 F20101221_AAAEDE roy_v_Page_61thm.jpg
c8225ecd1acd8669fccca27f2ccceb2b
ef5af0ce83fea977814a2b55c9f331b4248225f7
6933 F20101221_AAAECR roy_v_Page_54thm.jpg
0b245f8b2ac7e8db31e2598dc5dc0d17
30acf315f13d07343606ab24ee2b6f384c742c35
90cda251ed0530a7619fb060d2b07d91
18973 F20101221_AAAEDF roy_v_Page_62.QC.jpg
c6476fbbab271fee401945232e1ebf67
b2d5088a4d55b149ff2e1d1a7edf8d56aa76261b
fb0cdc0a707d4dfda8f89c54ce0c8ec9
9445fc63b1bc85f096e30fe7bb66f24d2f3ce669
29650 F20101221_AAAECS roy_v_Page_55.QC.jpg
9517771642227d963205f33d02112a4c
e7d0173eba8246aeec66efc4d36397874eb11c0b
4868f4494cabeeb339f03aaeae3c8a8d
2e3581789cc940fa620bd5f20bda6438d120f38a
07bff11aec4877f2b8ca2aff00667e1bb5ccb700
5793 F20101221_AAAEDG roy_v_Page_62thm.jpg
ec53b4d54147baac989e86b0c00ab559
2036a473f176902fc9027d63468a73a128fc66c4
4c4706b04297e0c63714986fefe8941631077968
7605 F20101221_AAAECT roy_v_Page_55thm.jpg
7f88cb0a97a799df94d63f2eef6c26ab
64fecd68c42f4aa43803b4ec4886ca236a298510
24cc916edae8b7bc36bc4a461fe331d2
e47f5c8ea221d1d34dc48628908f6fe73990d36b
bea7112070a4b937e858b1d5661427f2
307cfa9d3e2e43b88d45e5e7b2a4d25bc600c514
23660 F20101221_AAAEDH roy_v_Page_63.QC.jpg
63097c06bd1638a05a88bebdc30e58845e797c57
550ac11091a036bba687372b10dae3c5
79dbe0a464bca2b9bc52cfdb9a6696c92501157a
21439 F20101221_AAAECU roy_v_Page_56.QC.jpg
7bc75f88b69f3a7d772ee43412d6e2bbea8dc6aa
cb6885ba8bfeb7ddb1c7ac5643a85b26
146b4bd3fe83c5f5c7df74053012c27bc5001573
9e6aa38abeaf6ba3a3a9c5ccc47a1d65
45cb725046505d9bcaa777364dce347aa1da69ce
6459 F20101221_AAAEDI roy_v_Page_63thm.jpg
e3e6debdb50a2be10ffcf2ba9c0bd7d6
29ca382b2758a9be9741781f4094c498130738e8
874a2e4fbfeeb812b6ecfa37d0fddd4d
ee43103db1c2ec763778e6d642ce5950e833218a
5932 F20101221_AAAECV roy_v_Page_56thm.jpg
e9f5572122321eacdfd3156b66133355
fcaf78a9d3f5c83fea6fca44ee599804c2a3a24c
a0e97c41b78575ed598c0b3e274ab1c9
ff98b3b3ec34bfac28c38293eaeea5f7a1393e27
18382 F20101221_AAAEDJ roy_v_Page_64.QC.jpg
3711b16e5006c7f8324060431ce788e4
6e8fd3bc0faa348393a89f082deef8fda0e9418c
7357f1072e00d7d9907378499f6e74fa
66d0f237c52bc339d682a4a530ca06f1ae1d2643
24027 F20101221_AAAECW roy_v_Page_57.QC.jpg
131d7fa0a610fd1c484710d5ac1ea81b
5cc069a5a847cf2eb3e7364e33b73fa2b35f27c9
fdd209a0488119a46235a8fd56056bfccfe23d6d
5295 F20101221_AAAEDK roy_v_Page_64thm.jpg
61b7a5c5b8574c7ee491e07723805451
7a2aaceafe357daececbb64eff2269b96e10845a
5cbaf1a5c102112ec143f5c802a289c0
a3f24c4e51655dc0f490f2e54b52d0fd07a4a0ea
6235 F20101221_AAAECX roy_v_Page_57thm.jpg
9b948753867acce257828940f5e151a5
033fa4a4cc9cb05c7bece79ff71610957c34dc8f
9a57d542e9786c7ec139bedcb77b934e
dcb8edeb86bc3ab5e9003c87d40725b274f51d97
7194 F20101221_AAAEEA roy_v_Page_73thm.jpg
5763d13b2f4d6bddc3b7ae16f03e1e83
8d5ca13a52d3b73c84ec582fefcbc7c8ece8695a
22057 F20101221_AAAEDL roy_v_Page_65.QC.jpg
cb84b98e9ea45ba46a073de6484877b9
bcf52a62b1c81d13bd63a47fc14717d162845d25
857711c19948c9dd030aa15d8b97e96f
a92a1cd21209f05a1bd837f5201b96d22438c4c7
28378 F20101221_AAAECY roy_v_Page_58.QC.jpg
8895e8edf7a50b0d294f7b939a6ca6be
2cac2739cb88176e0b17263d97ff5e04089e23dc
afd2b146bc1b7436292e485c147c7eef
8e33943e8d54133c900fb2fa4833ae798907a244
23227 F20101221_AAAEEB roy_v_Page_74.QC.jpg
f5e6c81a4dd24e674d34a43c02bc2e80
4975460be156b4536a37b9a09c0a3accf8e46830
20986 F20101221_AAAEDM roy_v_Page_66.QC.jpg
d34bfcca21b32cd18d1eac6499477fcc
43062bd51585b4d79da95885ee83274ab878c457
a2f315d0507267957d12586250bd6e31
2caee55e613bf83cde221f50a055a36996c7f4d5
6743 F20101221_AAAECZ roy_v_Page_58thm.jpg
647751ae469be95cc57f658c325a96fe
e94f4dcba6d9ab31e2f05ac2c54d0837d493bae9
4f7032cbc13a522657133e23397bbb0d
5d5e76944c6b410ee1b582fdbc7807690ffa2014
5993 F20101221_AAAEEC roy_v_Page_74thm.jpg
7f102e452bc9c9a76dd5ba93a51ec883
dcce7ba3c6a659e7311cb2084052914d0630b6ff
5756 F20101221_AAAEDN roy_v_Page_66thm.jpg
5445002616a7ec33aaaca5bd9f80860d
ff9e1fa422c80b854ed85bcdf23cc3bcde1d8fca
217bd308b01f71cba2e944d918696e7c
bb77d32fa7edf95fca9571a6c4031f4e7e6c6176
fb5292c5726e0fc1bc49bc3c7c29130f
7328bc57243bc1678a58c39bc9c9c0925e838dc5
18784 F20101221_AAAEED roy_v_Page_75.QC.jpg
742daea90a0cd9cf12bc8d285ed8bd59
6cde61f08acfbd8a9597734ed293cfe03cf7b58d
29197 F20101221_AAAEDO roy_v_Page_67.QC.jpg
cdf9268153dcaaef6af3de9963f5f5e2
d4f42dbab7f3edce8ac28012faa20b3a
96e7ed7eab642916b99dd47603995486c2e422a1
7521 F20101221_AAAEDP roy_v_Page_67thm.jpg
0eb4e091ca858f0de254a02968a7fdca
7b5279317b40491351b1dd208e0a25b5e6869373
4463801bdce80e7bccc4fc14337b676f
4ea8bab54c333be450c359f9a14b64dcbc4f2168
4dc549d9e339f28e6a412dc239dc1ee8
c7e2a3bb6835fd4207cdeac2ba5ef19504cb2d67
7316 F20101221_AAAEEE roy_v_Page_76thm.jpg
c9aa505c434a2711a700035f1a85192b
cf56d6fea8c3a98771acd04cef68474e0ea0a811
21434 F20101221_AAAEDQ roy_v_Page_68.QC.jpg
fd8e9f5a07ae37693d8b1807443af472
8ca16e045d227177f51acb908d462122d61a0f5f
ffb2ff8c579e553c2d5a33e34531bb76
f32305e4a60f3dd373afaa80186f347b
2d150f9f8c28e451778d57afec59701019bf9353
21561 F20101221_AAAEEF roy_v_Page_77.QC.jpg
8fa65b60da7032aae30284ed4326bb97347611fe
6124 F20101221_AAAEDR roy_v_Page_68thm.jpg
5d2c733fde49719695a844c69878e6ed4127d886
49445be5d6d7cda367dd716b6ee23950
b1bc41ab938de4f827e34f2237af1411b7ed1b92
5521 F20101221_AAAEEG roy_v_Page_77thm.jpg
e74d05a2bcbd710e9fec64ccb7e34936
6217a9645b83fb781d4c233bd035ec2740d56b3e
24975 F20101221_AAAEDS roy_v_Page_69.QC.jpg
15a3216390f4918be3ba746f49aa662c
1db0a034b3440082ac12df9a6f7680b49b8be7f8
e9a44a71834769e8d70170c8848c7f04
67a9f1b6cf8161db496b073a4c2073946a906dac
b4bde950266b5bcf3d895fcd1931f2a4
036cd103e70673f7050611d0d59fe9e2268c6d64
25043 F20101221_AAAEEH roy_v_Page_78.QC.jpg
2fd92c87a505c6cb2634c9a59a53eb85
a090165f8846d3669b90c179b38b47df77188f56
99273e0d38e9e868debbd7d0ab4c3e32
43aa4f668e3d9ceb975278b935592f8365568242
6173 F20101221_AAAEDT roy_v_Page_69thm.jpg
d11579a8a6fa776356048243452e1c0ef3a2ccc1
bd9efd5a8241defe24a934f284607bf3
0d2078dac0a0b24b739725092e99bec39d378f19
774dc7d91d0ae76959a771cbe14b4336
e7be99c703580baaba5194ee155f1a503ee824ff
6566 F20101221_AAAEEI roy_v_Page_78thm.jpg
2c0b9a07fd1d7ee146600ec4dcbd1682
c4eab3c6724aeccc0a6d242871327114df2cfba7
e342a9157914af7b17792e81bce2e154
292d0b23ec8a19702a987a0fe23e85bb311b5dc7
3958 F20101221_AAAEDU roy_v_Page_70thm.jpg
d7b43c84a9c8d36ff70227d6ce05e3c0
85660beb65029e02f043a640951a05d323fa5827
25188 F20101221_AAAEEJ roy_v_Page_79.QC.jpg
d64641deeb70a4c376e607670fd10c4a
bb904cefe4bb71ce3c85e23290afd2a35dbf667f
4d8546cbee92ec035a46a4510617b3ee9495e97e
27535 F20101221_AAAEDV roy_v_Page_71.QC.jpg
e98119abe5a36be66ee18b75ac8d0d0e
715c3674b551c74930160ec0eb841bbbc26f55dc
944691a9c381bc4bb43f9cd40e4175eb
053678eaa72c87ee1fca62d58aee173be4c52603
6480 F20101221_AAAEEK roy_v_Page_79thm.jpg
c6d2ccda6fb02a9ca75a3ec66e4a9d88
a267177fb0c4cb417f1deeb04ea313c5
4d4bf85ef2f7d5076613b65987382941d935e885
6932 F20101221_AAAEDW roy_v_Page_71thm.jpg
a6070e349492a621824a4f7f2a57cfda
e7380488ec04e21bc790887961534bfc19823d20
2c0804ec2cf5f61a14d9773cc31b6691
1779368a6043a261d46d5e30beb9f4f7f6a17793
23446 F20101221_AAAEFA roy_v_Page_89.QC.jpg
d35a59bce34daccf3b02944907d85ebe
252114f4b8d796c01d6fb23342d70e607ef70343
25696 F20101221_AAAEEL roy_v_Page_80.QC.jpg
f23f66179813966228f88e97fcfe157b
08a0a80054a9520fc6c3eaf768604a3304748aee
16367eebd21e3b63bc4163421e8eca98
17f9346c25663742f4e16a16715b25433dc58b9b
20153 F20101221_AAAEDX roy_v_Page_72.QC.jpg
ddf147f1b32fe43e5679368c436764b9
b685b0a9c553f5caf61aa9ed9d2c7dbd61dfe8ce
a69e9bd1b55a4599354aaaff41627f0e
32512 F20101221_AAAEFB roy_v_Page_90.QC.jpg
f93ace798b2bedc95618fdd80ae4a132
84e1169aa6779eef329e607a69d815543a9b1a30
28090 F20101221_AAAEEM roy_v_Page_81.QC.jpg
72a9fc14605d32aedb41c27978d8080c
9bce2ea53ac1e3e480143d57aa3c102e5a86c836
d4b44a31bcb563309f73871e4d3e6c43
35d1ef215d29c715307f898a9933c68fa023f348
5537 F20101221_AAAEDY roy_v_Page_72thm.jpg
b5079079157741a7a00c898d2dc5e83c
7821 F20101221_AAAEFC roy_v_Page_90thm.jpg
ffc10c26cbd871c488bdf9d01fed7ae5
ba6f0c4708904c258e22a678b9bd4e059e004dbb
6788 F20101221_AAAEEN roy_v_Page_81thm.jpg
288eb84c50f0c087b1dc88bea64b526e
d588c3af8e534dd1da8ba04951e947d4a1a913a5
bd9c08306d990c734ee2d00c869b78be82fb3742
28868 F20101221_AAAEDZ roy_v_Page_73.QC.jpg
e85664a4f8118fef9972150dc4a7ddea
6e2698d46e31402808d74be652f0d64322ff9f60
da096f430885ecb9660a66baf9e34e57
32680 F20101221_AAAEFD roy_v_Page_91.QC.jpg
417b7c954e1f18b64b87a631c0ee7c01ea9572a6
33136 F20101221_AAAEEO roy_v_Page_82.QC.jpg
8ca3c4abb6756125b5134b4bfb8e55e1
2932477992b3da278c79b126f559a9b1fddd047a
64f3d5e28bafc04de5f5a482eb2a086c
0e8b3a9139a21149c1c5b52b0672258ed8745654
33251 F20101221_AAAEFE roy_v_Page_92.QC.jpg
b569d355665d5b334ecb7c3c4ea857e3
462291e7be5a77b32a5080d7051bdaf986a50716
7740 F20101221_AAAEEP roy_v_Page_82thm.jpg
53a5e80e065e72e4dfa8527f7cbb1dba
75cebfa1a37c5b648ee85dbb49840cda549d8f88
9a180f8ea6bdbd9035167327c11aff0d
2bb0f6aef855824b8bb611cb39cd4d4f485fde0e
7bcb5929627ceea5d709e1e43fd2e214c538265b
23826 F20101221_AAAEEQ roy_v_Page_83.QC.jpg
8b7e977023bdfce6ccc0750c0b8367c0
8996b6c29607be81758eff3063f55095eae2813b
0b88b6222443a0dfa2a81f30da964abb
25aa8d90c42d95036c0f775cbee3fa7945942135
77b065a7130d1b271639b50ed65f6915
28dde5263d6eea12c7bb6596f848e3ee8b9981d0
8039 F20101221_AAAEFF roy_v_Page_92thm.jpg
24334f1c8b16b86079f71776727d278f
150a51226e60d3dd556cc6d8b4cfe167ee4b0111
7154 F20101221_AAAEER roy_v_Page_83thm.jpg
59ddf8063f45ee9326407568abdfe230
c15a482cc97a4fe5580b962e8e4eeaaa13a94e21
4e089bd3035c873cfff1fcc0a71af794
56f00deb016c63df357021b080244e465932279f
6cbaf2724cedf44804f621ef2c740953
69703fc867145e62ea64451f436f97eb53eecbdf
13624 F20101221_AAAEFG roy_v_Page_93.QC.jpg
8b483cee874bab5d806f177db749c0a65171112d
27463 F20101221_AAAEES roy_v_Page_84.QC.jpg
773d8f26abc99fc8e6524cd07b950b29
0cf997d0093f8f1a739151666e68ea1c5ac12fcc
27d2f1a906c03207af4a81de5329a987
a0f2732cd4c676aaa3249548530fd2b7fd9b3c4c
3468 F20101221_AAAEFH roy_v_Page_93thm.jpg
f54ea616ee6808db064344bd3b4760ea
f15120f10792a27731b198f6fe2f5b03e82fa426
6691 F20101221_AAAEET roy_v_Page_84thm.jpg
a50fd5a338dee86a630ef64eb408465c
ca3a77690fd621967446cf0471a7dd1fa13884f2
4dd1ae5bdf920c6ed45b00f0c9cf36ed
2809 F20101221_AAAEFI roy_v_Page_94thm.jpg
e65aba80025044b818c9f227a594a981
30072 F20101221_AAAEEU roy_v_Page_85.QC.jpg
3eb659fc00cbaeb92b4098d7c748378d
e904da0bd92acce442bab43c53542cea397281ea
09f1b316829187d3aa96b9c59ce26fd0
108073 F20101221_AAAEFJ UFE0022377_00001.mets
ff93ba8767a79635f953e3ee95b7c79c
c3c573cdc4a2d79cf966ff2aff7dee8ccb5c36b7
7345 F20101221_AAAEEV roy_v_Page_85thm.jpg
23c7dc6078035a2bc13087364d71bb0c
53d4843bb9502058e5662e5695f5d1c466f844f7
4a9966db16b7e4e1716cb705ceb110cd
685bb27d70780b34aab5eb45172a98dc3f688828
22345 F20101221_AAAEEW roy_v_Page_86.QC.jpg
098f207f86f0df19dffee20269e823bc
30045941a82f6430ccf4af19e4816c2b8bbd0124
560a5226740e5beaed2d4d7913354860
d8f2d1b067948c633d55b152228dd9ed7170e866
6129 F20101221_AAAEEX roy_v_Page_86thm.jpg
12b9bd1c426fa6edd576502ee1f2dfce4b650d4e
8815caaa1dd984f9564ef030b70ece69
7f0aec99ff4feb0ce863a6c7aa9faba36f427e41
5961 F20101221_AAAEEY roy_v_Page_87thm.jpg
b3d60899f41ca13ca4f57fa1b5596dda
5430d41c63d22051849d000f7ec666a1f081a2b0
455abbe4c9b8b48393c635e7e1a9f85528c60dc5
4028 F20101221_AAAEEZ roy_v_Page_88thm.jpg
1192a0530c676beab0e5e6e99b502cc0
986359eaef3d74d5d12fca21a382feba
2c07f2d50510b5a096c23b48956ac37db11a7644
f7bbacb08d14f5d5062b2d216a96fd89
ef422df3deb321a3a0678cd88c131aa4d7974e5e
929b4034e103bdafd39737d50e145a3a
91f1e84113684d43da22c62d0d07de14df457a05
e1951448a7a503085fd039a8b18d6a6c
4dc623ce72f0bce3b86b0c76b42a9cabf8c5fff1
f0e5ec3cb1cfe4f1844d903015b14ede
2ac5053b548d370faa3f8bfa942e604660347a5c
ecdcd993bcb58735eb4af4f28c6d7a79
a6bcea07a00e766ef13de5eb3f6647b3531f4ac4
56ddf78b8f5b17a00b72952fc8f575f4
1718e964a18d4cc4743a52e80870860edebda55b
d2cab3a0049ce6417d3ba017baf9dd2a819cf65a
08c1e45c4e0bccc9303bb903a8a828db
fa4726d7473ee381ea22a330a1c747976133f9fb
dab75a1ea49a897afae783f3d65dd6d7
63ea3178f9ce6e754ac618322e3eae60212c9569
9c7905cec83ab936f624f4967214454d
1f544a089afef595d5c94c5ee1b6059fe7662f37
06e58a07328ae10983c20110f1a273c3
1877570d9f303da1c0c6165dd448510b5714516b
0c2d03f5a0b6e6db03f639640831d13a
9ca793ffb4414f900c48566bde9599d1
44e1129d92c409dacf2302cffcdb9808f091b8ea
8d94d5038887885e095e0590677b62ae
6b712caf0d1791c41b1eabe0f1b1f8249ed754db
ab0e63a4be9b380e1ea6935a475bd85a
87ce2bc3964a3dccc663b40acce7be9995027ec1
6850150c60810f463a1ff5290c7148d8
406c70a0b2ffb15454436f36be14e1a5f6ec742b
60bf2be058404928c0f9b594e9624880
3b1af72d63b7fe8b4633cc7721ddeca4
159438c3412b246a443f6a27f36396bf8f98ced6
375dc1d53d7ff3ef9c1158275a6494e2
285b3753d3c6f9a12148081dbb00cbceec530019
ab3281cfee01764597a0eba001cd743d
70a950b32fa08478a433e769bb8b307e53876eb4
cf3f4fd4c624c86674abd2cfc5a1c0a5
22311d9b0011a220c8c79b76ef8a440010239277
5eef71058e0e824daaa5924118e1dd34
3159402a1119b7aa07736e91af41fa6f57c5b6b6
332d773b41424d1bcfc3f47c3e6df8f8
69035333eb8af68dfec7252667981b35ce6b256b
1b041254f35dc1de81115215fc8f659e
3cb2d24b46bec3ea831f8d1ccbf6a75e00d99391
4b026268dc44490f1c85b09c8314f389
9bdb2c0e581d94bdd73252afbdd30d24ded80c70
da8d03c99e88d6653541fd9455c87048
020ce34b57b13a6a1d2ed07d64c4f386
e9a0da6252d87f7e810362428982d89104185dd6
414665cc0f81f0401846fd91c9e9fdc8
5040a1b213d83864aa07834889041b3b41087504
295e46ccf6a83fb6be4e66f70d4e69b7
2a3ffaf65954f0eaa58d45466ac379683aebf363
4c9a2bb70ab1bfd7e9963ed733fe544e
b51555c7ab4fc4cddffe1884708f0046f0e97e09
b60a47a94c7dd5949be4701b18b4924e
5c56183ae86005e6c4ae5f40542365cf092fa7e9
7033d284b8cfb5fd3443fd500d3d240c
fec67d0ba58a8571d7eb786f0399852181679e83
af796cbe9dbed4afec5fd5ab708de0d9
ff69dcfbda74f7aed3037e22833a7c8e1abd69d6
4b6c344a5fa1dce1b46f8dc110648db1
6846fa205929f7a760a76101d9835bbbd3f3cde7
cdfbd7421ef686fa4c3691b71ce937cd
5c4d3df82c195a931baf9461b59cde2de61b8ea8
4ebe210c80f87e93080eee6fb0a4c819
849ff5509583518d881b96a9595774746c88723c
864980a1cc9c16054c5fc18488b3bd17
2f46c507a765199e8303ee3690d8fc0602edd04f
f594328b711281d33aab440733d89ff7
0e4293a37de8f1861fd81fc94cc30ec53b9dae4a
a16fa01c2c33855a9aefa13c9d67b719e035ec6a
cc99e1f891608aea3036e87b5a37ccd5
43c5d49779af108f42594911ca4e65af514dd388
8230f7f4aff6769602667a61cc7282ee
12ea6ba1e679d1a638b7385940e5ecbaf562fb0f
b496e099d34893be6a8184d7ae925552
3c93812c67615cc5520b37100c0d0c3c016b4af3
7b4c6a0069ea59ddb586db614f362cf6
2c5cf0a801c78d21631c78d2e84b99674e8ef212
e77d1c4fa050af6266ec21325b6a8b11
6c7feca2a94b6c5f3dd132379e13e05258e2a2f2
376662feb411d6a8035d9bef376fa02c
1d7794d97f279cc0123f1d177f755b4f184bfa1e
e2f3a2dc15c0a9ed1943b2d4209bb700
0fd63cb7dc948020758489decfc8f5a7e8c920be
9ded33a854ab23251b6c6200e229d3de
9d7d3bf61f0eab1fff02f42dc3bf0818cb47c0d3
eee145e554d6e2fa69f2338fd59bd5b5
840f624c245affda95f4628bed0ccf00b09a683d
7a8fdb5c72bb8cd35dea87b5e05fc1f3
e5c4306005995b65b6714e984ceda00bf04204fe
6575ea79d918417315695af784323f82
4641bf25efcd7247fc00613bcfa91b08e9e93d4a
0cbf6aaa3435e01ac8c2ec7dfe7c50a6
d2c07e4fb528edaf07f424c72a6b8dedfd84db27
a884d8805f62ddfa1aa602a318d2ed5b
889e58145bef61eba3ddec3d07d1365e
21af0c222cb518c0ed740c988c11c78e5ed73187
906e0e75a13c619be5e6bddb2ee99528
9739ca8d3eedcae2a859767ecf43ff5990742d3e
d22966a024e5dee685fb9b50eeab05e8
c7586b61013d5b0c68e8879c6bf5070f864846af
1e0566cdd4f88f0c0cfdfeee3dc0fc43
b245a5a176b0f4e89d0f06dc0eb0d58029a913f2
0a15dfbec6d02568c30afe2c28dcaf88
453f4f5f8bab4e612919fdeaeb0a22eb
42846781f2856da5ae922fe0c1d1cd8a9959956c
b15c77a9e68b7d7ab7e13947cdbe9cb7
fa388d9b2246590f2425f334f2d654cc
a01e360c3082d4820be6d87233ff6bb7ff123060
658bfc693151d41e8a5fd16059c9125b
6afdb395dd0612c6d3e2d8013928be758969ba68
THEORETICAL AND METHODOLOGICAL DEVELOPMENTS FOR MARK(OV
CHAIN MONTE CARLO ALGORITHMS FOR BAYESIAN REGRESSION
By
VIVEK(ANANDA ROY
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
2008
S2008 Vivekananda Roy
To my parents
ACKNOWLEDGMENTS
I extend my sincerest thanks to my advisor .Jint Hohert for his guidance throughout
my graduate study at University of Florida. I feel fortunate to have .Jint as my PhD
advisor. His guidance, help, enthusiasm, were all crucial to making this thesis take its
current shape. I'ni deeply grateful to hint for many other things, not the least for his
inspiring words in my hours of need.
I would also like to thank Professors Ben Bolker, Hani Doss and Brett Presnell for
agreeing to serve on my coninittee. I am particularly grateful to Professors Hani Doss and
Brett Presnell for being so kind to me over the past five years. I learned not only statistics
but also a lot about Emacs, LaTex and R front them.
I would also like to thank Professors Bob Dorazio, Malay Ghosh and Andrew
Rosalsky for sparing a lot of their valuable time on academic discussions with me and
giving me advice on several issues. I thank all my teachers from school, college and Indian
Statistical Institute whose dedication to teaching and quest for knowledge have inspired
me to pursue higher study.
Special thanks go to All .Ilv:adi and Parag for their friendship, care and support. I
have learned a lot about life in past five years front both of them. I owe deep gratitude to
.Jethinia whose care and affection I will never forget. I am thankful to Shuva whose love
and enthusiasm for niathentatics have ahr-l- .- inspired me.
I am indebted to many other people, mostly front my village, who guided me and
encouraged me during the formative years of my life: Arunda, Arunkaku, Bapida,
Bonikeshjethu, Budhujethu, Shashankajethu and my uncle.
Finally, I would like to thank my parents for ahr-l-~ .14eing a driving force in my life.
I often feel that whatever I have achieved is only due to nly parents' sacrifice, hard work
and honesty.
page
ACK(NOWLEDGMENTS .......... . .. .. 4
LIST OF TABLES ......... ... . 6
ABSTRACT ............ .......... .. 7
CHAPTER
1 INTRODUCTION ......... ... .. 9
2 MARKOGV CHAIN BACKGROUND . ..... .. 18
3 BAYESIAN PROBIT REGRESSION . ..... .. 24
3.1 Introduction .. .. ... . .. ..... .. 24
3.2 Geometric Convergence and CLTs for the AC Algorithm .. .. .. 27
3.3 Comparing the AC and PX-DA Algorithms .... .. 35
3.4 Consistent Estimators of Asymptotic Variances via Regeneration .. .. 37
4 BAYESIAN MULTIVARIATE REGRESSION .. .. .. .. 48
4.1 Introduction ......... . .. .. 48
4.2 Proof of Posterior Propriety ........ ... .. 50
4.3 The Algorithms ......... . .. 58
4.3.1 Data Augmentation ....... .. .. 58
4.3.2 Haar PX-DA Algorithm . ..... .. .. 60
4.4 Geometric Ergodicity of the Algorithms ... .. .. .. 61
5 SPECTRAL THEOREM AND ORDERING OF MARKOGV CHAINS .. .. 71
5.1 Spectral Theory for Normal Operators .... ... 71
5.2 Application of Spectral Theory to Markov C'!s I!us< ... .. .. 81
APPENDIX: CHEN AND SHAO'S CONDITIONS ... .. .. 89
REFERENCES ......._._.. ........_._.. 90
BIOGRAPHICAL SK(ETCH ....._._. .. .. 94
LIST OF TABLES
Table
page
3-1 Results based on R = 100 regfenerations ...... .. . 47
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
THEORETICAL AND METHODOLOGICAL DEVELOPMENTS FOR MARK(OV
CHAIN MONTE CARLO ALGORITHMS FOR BAYESIAN REGRESSION
By
Vivekananda Roy
August 2008
C'I I!r: James P. Hohert
Major: Statistics
I develop theoretical and methodological results for Markov chain Monte Carlo
(jl\C'l\C) algorithms for two different B li-. -1 Ia regression models. First, I consider a
profit regression problem in which }*\,...,Y*, are independent Bernoulli random variables
such that Pr(}* = 1) = #(:rf#) where :rs is a p-dimensional vector of known covariates
associated with }*, /3 is a p-dimensional vector of unknown regression coefficients and
#(-) denotes the standard normal distribution function. I study two frequently used
MC1| C algorithms for exploring the intractable posterior density that results when the
profit regression likelihood is combined with a flat prior on 73. These algorithms are
Albert and Chih's data augmentation algorithm and Liu and Wu's PX-DA algorithm.
I prove that both of these algorithms converge at a geometric rate, which ensures the
existence of central limit theorems (CLTs) for ergodic averages under a second moment
condition. While these two algorithms are essentially equivalent in terms of computational
complexity, I show that the PX-DA algorithm is theoretically more efficient in the sense
that the .I-i-ini!!lle'~ variance in the CLT under the PX-DA algorithm is no larger than
that under Albert and Chih's algorithm. A simple, consistent estimator of the .I-i-inia..'l~e
variance in the CLT is constructed using regeneration. As an illustration, I apply my
results to van Dyk and Meng's lupus data. In this particular example, the estimated
.I-i-inidul'lic relative efficiency of the PX-DA algorithm with respect to Albert and Chih's
algorithm is about 65, which demonstrates that huge gains in efficiency are possible by
using PX-DA.
Second, I consider multivariate regression models where the distribution of the errors
is a scale mixture of normals. Let xr denote the posterior density that results when the
likelihood of n observations from the corresponding regression model is combined with
the standard non-informative prior. I provide necessary and sufficient condition for the
propriety of the posterior distribution, xr. I develop two 1\C1| C algorithms that can he
used to explore the intractable density xr. These algorithms are the data augmentation
algorithm and the Haar PX-DA algorithm. I compare the two algorithms in terms of
efficiency ordering. I establish drift and minorization conditions to study the convergence
rates of these algorithms.
CHAPTER 1
INTRODUCTION
Realistic statistical modeling often leads to a complex, high-dimensional model that
precludes analytical, closed-form calculation which is required for statistical inference and
prediction. If we combine the complex model with a prior distribution on the unknown
parameters, as is done in B li-o -1 .Is Statistical analysis, the result is typically an intractable
posterior distribution of the model parameters given the observations. Suppose xr(0|y)
is the posterior density of the p x 1 vector of unknown model parameters, 8, given
the observations, y. In B li-, -i Ia inference, we are often interested in evaluating the
expectation of some function, my- f, with respect to the posterior density xr, i.e., we want
to know
Ex f= f 0)x(|y) O .(1-1)
Because the density xr(0|y) is a complicated function, closed-form calculation of the
above integral is generally impossible. We assume that the above integral exists and is
finite. Since Ex f can not be evaluated analytically, we use either deterministic numerical
integration techniques or simulation based methods to get an approximate value of (1-1).
Before delving into these computational methods, we provide two motivating examples.
In both of these examples, statistical modelling results in an intractable posterior density
making explicit closed-form calculation of the corresponding posterior expectations
impossible.
In problems involving toxicity tests and '.1. .-- li experiments, the responses are
often binary since what is observed is whether the subject is dead or whether a tumor
has appeared. A popular method of analyzing binary data is through B li-o -1 .Is analysis
with a probit link function. Suppose that we observe n independent Bernoulli random
variables, Yi,...,Y,, and we assume that Pr( = 1) = #(xf ) where x4 is ap xn 1 vectonr of
known covariates associated with ~, is a p x 1 vector of unknown regression coefficients
and #(-) denotes the standard normal distribution function. For ye {0(, 1}", that is,
Y = (Y1,... ,ys~) and y, E {0,1}), we have
i= 1
If we use a flat prior on p, the marginal density of the data takes the form
It is not obvious whether cy (y) < 00. We address this issue in chapter 3. Assuming
city) < 00, the posterior density of p takes the following form
i= 1
Clearly, the posterior density xr(P | y) is too complicated to allow explicit closed-form
calculation of posterior expectations of functions of P.
It has long been known that heavy-tailed error distributions are often required
when modelling financial data. Specific scale mixtures of normal distributions can be
used for modelling heavy-tailed data. Our second example is a B li- Ion multivariate
regression model where the distribution of the errors is a scale mixture of normals.
Suppose Y1, Y2, *, n are d-dimensional random vectors (e.g. returns on some assets)
satisfying the linear regression model
S= PXi + Ei
where p is the k x d matrix of unknown regression coefficients, xi's are k x 1 vectors of
known explanatory variables and we assume that, conditional on the positive definite
matrix E, the d-variate error vectorS E1, ,E are independently and identically
distributed with common density
fH6d E)= exp E ETZ-1E dUH(5
o(2xr) |ECa
where H(-) is the distribution function of a non-negative random variable. The density,
fH, clearly, iS a multivariate scale mixture of normals. The density fH can be made
heavy-tailed by choosing H appropriately.
We can rewrite the above regression model as
Y = Xp +E
where Y = (Yi, .. ., Y,)T is the a x d matrix of observations, X = (xl, xa2, *, )T iS the
a x k matrix of covariates and E = (E, .. En) iS the a x d matrix of error variables. The
likelihood function for this regression model is given by
f y|, ) (x)|E exp 6 -0x 2 H6
If we consider the standard noninformative prior on (P, E), i.e., if we assume that the prior
d~l
density, xr(P, E) oc |E| 2 the posterior density takes the following form
C2 1 x "
C2(y i i=1 0~ (2xr) |E|-I 2
where c2 9) is the marginal density of y given by
d(d+1)
where W CR 2W is the set of d x d positive definite matrices. In chapter 4, we
provide necessary and sufficient conditions for c2 9) < OO. As in the previous example,
posterior expectations with respect to the posterior density, xr(P, E | y), are not available in
closed-form.
We now discuss different computational methods that can be used to approximate
(1-1). These computational methods are broadly of two types, namely, numerical
integration methods and simulation based methods. If the dimension, p, is not large,
numerical integration techniques can be efficiently used to obtain a good approximation of
(1-1). But, as p increases, numerical integration techniques become less and less efficient
because of the well known problem called the curse of I.:Is:.: ,:r.......lU;, In this dissertation,
we consider simulation based methods to estimate the posterior expectations.
An alternative to numerical integration is to estimate (1-1) by Monte Carlo sampling.
Monte Carlo integration requires drawing iid samples X*,X*,...,XA_, from xr(-) and
then using the sample mean
m-1
fm f(X J ),
j=0
to estimate the expectation (population mean) in (1-1). The justification of Monte Carlo
methods comes from the strong law of large numbers (SLLN~), which guanrantees that fm
converges almost surely to Ex f as m tends to infinity. So, Ex f can be well approximated
by fm provided the sample size, m71, is large enough. We often know r only up to its
normalizing constant, i.e., usually, ,, xr(0|y) de is unknown. In that case, we can use
rejection sampling methods Robert and Casella [38, C'!s Ilter 2.3] to obtain an iid sample
from xr. Rather than giving details about different Monte Carlo methods, we now address
an important issue that the experimenters ahr-l- .- face -i.e., how to choose the agi
sample size, m?
How large a sample size is sufficient is a subjective matter. It depends on how much
error we are willing to accept in the approximation. One way to measure the accuracy
in the approximation is by the width of a 95' confidence interval for Ex f. A confidence
interval for Ex f can be obtained using the central limit theorem (CLT) for the estimator
fm. If f has a finite second moment with respect to r, i.e., if Exf2 < 00, then by the
classical central limit theorem we have
fl7, Ex f 1 N(0,v2) aS n 4 OO
where v2 = Exf2 (Exrf)2. An .l-i-mptotic 95' confidence interval for Ex f is given by
Im + 2sm,/~ where s,, is a~ strongly con~sistent, estim~a~tor of v2" giVen? by
m-1
i=0
Since the sample size, m, is under our control, the main benefit of calculating the
confidence interval is to determine whether the Monte Carlo sample size we choose is
large enough. In practice, one draws a random sample of size M~ from xr for some finite
n~umber M! anld constructs th~e confidence interval fMI + 28M/2il. If the Ilength? of the
resulting confidence interval, 4sM/ lz, Seems to be satisfactory, then one stops the
sampling and reports fM RS an eStimate of Ex f. On the contrary, if the .I-i-mptotic 95' .
confidence interval is deemed too wide, then M~ can be increased appropriately and further
simulation can be carried out until the desired level of accuracy is achieved. Of course,
in the latter case, if we know beforehand the precision, e, that we want to achieve then
we can use SM/ RS a pilot estimate of v to calculate an approximate sample size, namely,
(28M @l~2, that we need.
In practice, making iid draws from xr might not be feasible. For example, in the probit
regression model that we mentioned, it is difficult to make iid draws from the posterior
density, xr(Ply), especially when the dimension, p, is large. Similarly, for the multivariate
regression model that we discussed, it is problematic to produce a useful Monte Carlo
method to simulate from the posterior density, xr(P, Ely).
Surprisingly, it is straightforward to construct a Markov chain with stationary
distribution xr even when direct simulation from xr is impossible. As explained in the next
paragraph, it turns out that it is indeed possible to approximate (1-1) by simulating a
Markov chain with stationary distribution xr. This is the basic principle of Markov chain
Monte Carlo (ifi'lC) method. The most general algorithm for producing Markov chains
with arbitrary stationary distribution xr is the Metropolis-Hastings (il-H) algorithm. A
simple introduction to the M-H algorithm is given in Chib and Greenberg [7]. Another
widely used MCijlC algorithm is the Gibbs Sampler [4]. Suppose the p-dimensional vector
8 in (1-1) can be written as 8 = (01, 82, ). The simplest Gibbs sampler (but, not
the general Gibbs sampler) requires one to be able to simulate from all univariate full
conditional densities of xr i.e., it is required to simulate from the conditional distributions,
s| {0;, j / i} for i = 1, 2,..., p. It is also possible to create a hybrid algorithm which uses
different versions of M-H algorithm together with Gibbs sampler to construct a Markov
chain with stationary distribution xr. As our discussion so-----~ -r- there is a plethora
of Markov chains with stationary distribution xr. In order to choose between MCijlC
algorithms, we need an ordering of Markov chains having the same stationary distribution
xr. In C'!s Ilters 2 and 5, we describe different partial orderings of Markov chains.
Let {Xjy}Ro denote the Markov chain associated with an MCijLC algorithm that is
used to explore xr. If {Xjy}R is Harris ergodic (defined in C'!s Ilter 2), the ergodic theorem
implies that, no matter what the distribution of the starting value, Xo,
m-1
fm := ~f(X,)
j=0
is a strongly consistent estimator of Ex f, i.e., fm Ex f almost surely as m c o. So,
the ergodic theorem (like the SLLN in iid case) ensures that Ex f can be approximated
by running a well behaved Markov chain for sufficiently large number of iterations. In
practice, as in iid case, one simulates the chain for a finite number of iterations, ;?i M~',
and reports fM;r aS the estimate of Ex f. Suppose there is an associated central limit
theorem (CLT) given by
fm x f (0, .2) S 0 00 (1-)
and that we have a consistent estimator of o.2, Call it ~2. So, we can compute an
.I-i-inidlli'lc standard error for fM;r, Which is given by &/ ~l. As in the iid case, the
.I-i-inidlli'lc 95'. confidence interval given by fMr & 29-/21 can then be used to decide
whether there is any need for further simulation.
As -II_0-r-- -1. in the previous paragraph, establishing a central limit theorem for a
Markov chain is essential in order to put MC10L on equal footing with iid sampling.
Unfortunately, unlike in classical Monte Carlo methods, the finite second moment
condition i.e., E, f2 < 00 does not insure a CLT for f,~. In addition, the Harris ergodicity
which establishes the strong consistency of foz is not enough to guarantee that (1-2)
holds. It generally requires rigorous analysis of the Markov chain {Xj;}Ro in order to
prove that CLT holds for f,,. There are several v- .--s Of establishing the CLT in (1-2).
These approaches can he broadly divided into two categories. One approach is based on
probabilistic (convergence rate) aI, l1i--;-; of the Markov chain. We give a brief description
of these techniques in C'!s Ilter 2. The other approach exploits results from functional
analysis (see Chapter 5).
Another difficulty in constructing the confidence interval, fn/r & 2o-/ W1 is that even
when there is a CLT, finding a simple, consistent estimator of the .I-i-mptotic variance,
0.2, can he challenging due to the dependence among the random variables in the Markov
chain. Mykland, Tierney, and Yu [:33] show that when CLT exists, regenerative simulation
(R S) methods can he used to construct a consistent estimator of o.2 by uncovering the
regenerative properties of the Markov chain (Section :3.4). The regenerative simulation
technique basically breaks the whole Markov chain up into iid pieces (tours) by keeping
track of the regeneration times. Then, standard iid theory can he used to analyze the
.I-i-ini!!lle'~ behavior of the ergodic average, f,,2 and thus a simple, consistent estimator
of .l-i-!Injdull- variance is obtained. It might not he easy to implement the RS method in
practice. There are other methods like batch mean and spectral methods which are easier
to employ to estimate the .I-i-inidull-; 1 variance (Jones et al. [21] and the references cited
therein). The advantage of using RS method is that it is on stronger theoretical footing
than the other methods.
We now provide a brief overview of the four remaining chapters of this dissertation.
In the next chapter, we review some results from general state space Markov chain
theory. In particular, we mention sufficient conditions for Markov chain CLT and provide
a partial ordering of Markov chains based on their performance in the central limit
theorem.
In (I Ilpter 3, we study two MC \!C algorithms that are frequently used for exploring
the posterior density, xr(Ply), that we mentioned before in the context of the probit
regression example. These algorithms are Albert and Chib's [1993] data augmentation
algorithm and Liu and Wu's [1999] PX-DA algorithm. We study the convergence rate
of these algorithms and prove the existence of central limit theorems (CLTs) for ergodic
averages under a second moment condition. We compare these two algorithms and show
that the PX-DA algorithm should ah-- .--s be used since it is more efficient than the
other algorithm in the sense of having smaller .I-i-mptotic variance in the central limit
theorem (CLT). A simple, consistent estimator of the .I-i-inidllicl~ variance in the CLT is
constructed using regenerative simulation methods.
In C'!s Ilter 4, we consider B li-, -i Ia multivariate regression models where the
distribution of the errors is a scale mixture of normals. We noticed before that if
the standard noninformative prior is used on the parameters (P, E), then posterior
expectations with respect to the corresponding posterior density, xr(P, Ely), are not
available in closed-form. We develop two MC1| C algorithms that can be used to explore
the density xr(P, Ely). These algorithms are the data augmentation algorithm and the
Haar PX-DA algorithm. We compare the two algorithms and study their converge rates.
We also provide necessary and sufficient conditions for the propriety of the posterior
density, xr(P, Ely).
While in ChI Ilpters 3 and 4, we used probabilistic techniques to 2.1, lli-. .. different
MC1| C algorithms, it is possible to take a functional analytic approach to study and
compare different Markov chains. In ChI Ilpter 5, we give a brief overview of some results
from functional 2.!! l1i--;- In particular, we discuss the spectral theorem for bounded,
normal operators on Hilbert space. We show how these results of functional analysis can
be used to study Markov chains.
CHAPTER 2
MARK(OV CHAIN BACKGROUND
Let A = {Ami~o denote a time-homogeneous discrete-time Markov chain on a
general state space X equipped with a countably generated o--algebra B(X). Let Pm(x, A)
be the m-step Markov transition function associated with A for m = 1, 2, 3,. .. So,
Pm(x, A) denotes the probability that the Markov chain at x will be in the set A after m
steps (transitions), that is, for x E X, Ae B (X) and le {0(, 1, 2, ...},
Pm(x, A) = Pr(Am+i E A | At = x)
When m = 1, we simply denote the one step Markov transition function by P(x, A) and
for m = 2, 3, .. ., Pm(x, A) is defined iteratively by
Pm~x A) P~, dyPm-(y, A).
A probability measure xr on B(X) is called an invariant probability measure for A, if, for all
measurable sets A,
xr(A) = P,, AxIdx)
Note that,
CP2(x, A)a(dx:) = SP(x.)P, d)P(y, )=~x = (dy)P(y,l A)= (A).
Similarly, we can show that f Pm(x, A) x (dx) = x (A) for m = 1, 2, 3,. .. So, if Ao ~ xr,
then Am ~ xr for all m and A is stath-.: o a,~r in distribution.
Let L2(;,) be the vetor space of real-valued, measurable functions on X that are
square-integrable with respect to xr, i.e.,
The Markov chain A is said to be reversible with respect to xr if for all functions f, g e
L2(r
fC S (yl~gr)P(x, dy)r(dx) = f~S I(x)g(y)P(x, dy)i(dxl).
If we take g(x) = 1 in the above equation, we get
f S~(y)P(x, dy~~h)=XS(dx) =f(x)Ix dy)i(dx) = f l~(x:)xd)
i.e., xr is invariant for A. So, if a Markov chain is reversible with respect to xr then xr is
invariant for the chain.
Suppose that p is a non-trivial, o--finite measure on X. The Markov chain A is
called p-irreducible if for each x E X and each set A with p(A) > 0, there exists an
m EN := {1, 2,. .. }, which may depend on x and A, such that Pm(x, A) > 0. In words,
the chain is p-irreducible if every set with positive p measure is accessible from every
point in the state space. The measure p is called an irr..I;. 09ii;, measure for A. As in
M.~ in and Tweedie [30, Section 4.2], when we ;?i "A is ~-irreducible" we mean that A\
is p--irreducible for some p and that is a maximal irre 7 09ii;, measure for A. Two
properties of maximal irreducibility measures that will be used in the sequel are (i) if p is
an irreducibility measure and is a maximal irreducibility measure, then p is absolutely
continuous with respect to (denoted > p), and (ii) a maximal irreducibility measure is
unique up to equivalence, i.e., if I1 and 2a are both maximal irreducibility measures, then
#1 > #2 and 2a > 1 (denoted 21 a
The p--irreducible Markov chain A is ap~eriodic if there do not exist an integer d > 2
and disjoint subsets Ao, Al,. ., Ad- _c X with p~(Ao) > 0, such that for all i = 0, 1, .. d- 1
and all x E Ai,
P(x, Aj) = 1 for j = i + 1(mod d).
Suppose A is ~-irreducible and define B+(X) = {A E B(X) : ~(A) > 0}. The Markov
chain A is called Harris recurrent if for all Ae B +(X),
Pr (Am EA i.o. | Ao = x) = 1 for all x E X.
The Markov chain A is called Harris ergodic if it is ~-irreducible, periodic and Harris
recurrent. The Harris ergodicity of a Markov chain is often easy to verify in practice and it
implies that, for every x E X,
||Pm(x, -) jT(-) | 0a m co ., ,
where ||Pm(x, -) xr(-)|| denotes the total variation distance between the probability
measures Pm"(x, -) and 7i(-), i.e., the supremumn over measurable A of Pm"(x, A) x(A) .
However, the Harris ergodicity tells us nothing about the rate at which this convergence
takes place. If it takes place at a geometric rate, then A is said to be geome/0.:. ellol ergodic.
More precisely, the Harris ergodic Markov chain A is geometrically ergodic if there exists a
constant p E [0, 1) and a function M : X [0, 00) such that for any x E X and any me N ,
|| P~x,-) -x () | < Mx) m .(2-1)
We now describe methods that are used to prove the geometric ergodicity of a Markov
chain. One method of proving that A is geometrically ergodic is by establishing drift
and minorization conditions. There are several v-wsi~ of doing this (11. i-n and Tweedie
[31], Rosenthal [45], Roberts and Tweedie [40]). Here, we describe a method based on
Rosenthal's [1995] work.
A drift condition holds if for some function V : X [0, 00),
PV < AV + L
for some A E [0, 1) and some L < 00, where (PV)(x) = f, V(y)P(x, dy). The function V is
often called a drift function.
An associated minorization condition holds if for some probability measure Q(-) on
B(X) and some E > 0 we have
P(x, A) > EQ(A) VX E C and VA E B(X)
where C := {x E X : V(x) < l} with I being any number larger than 2L/(1 A).
Rosenthal's [1995] Theorem 12 shows that the above drift and minorization conditions,
together, imply that A is geometrically ergodic. In Chapter 4, we employ drift and
minorization conditions to prove the geometric ergodicity of the data augmentation
algorithm used in B li-, Io multivariate Student's t regression problem.
One advantage of proving geometric ergodicity of A by establishing the above drift
and minorization conditions is that using Rosenthal's [1995] Theorem 12, we also can
calculate an upper bound of M~(x)pm in (2-1). This upper bound can be used to compute
an appropriate burn-in period ( Jones and Hobert [23], Marchev and Hobert [28]). There
are other methods of proving geometric ergodicity of a Markov chain that do not provide
any quantitative bound of M~(x)pm in (2-1). We describe one such method now.
We will assume that X is equipped with a locally compact, separable, metrizable
topology with B(X) as the Borel o--field. A function V : X [0, 00) is said to be
unbounded of compact sets if for every y > 0, the level set {x : V(x) < y} is compact.
The Markov chain A is said to be a Feller chain if, for any open set Oe B (X), P(-, O) is
a lower-semicontinuous function. The following proposition is a special case of M.~ i-n and
Tweedle's [1993] Lemma 15.2.8.
Proposition 1. Sup~pose that the Harris ergodic M~arkov chain A is a Feller chain.
Supplose further that the support of a maximal irrech;.-.l..7. ;, measure has non-tii pl ;
interior. If for some V :X [0, 00) that is unbounded of compact sets
PV < AV + L
for some A E [0, 1) and some L < 00, then the M~arkov chain, A, is geomen,... ellol ergodic.
In C'!s Ilter 3, we apply Proposition 1 to establish geometric ergodicity of MC \!C
algorithms used in B li-, -i Ia probit regression problem. Hobert and G. o;r [15] emploi-. I
Proposition 1 to establish the geometric ergodicity of Gibbs samplers associated with
B li-, -i Ia hierarchical random effects models.
Notice that, unlike Proposition 1, the drift condition in Rosenthal's [1995] Theorem
12 does not require the drift function, V, to be unbounded off compact sets. Also,
Rosenthal's [1995] Theorem 12 does not need A to be a Feller chain.
The driving force behind MCijlC is the ergodic theorem, which is simply a version
of the strong law that holds for well-behaved Markov chains, e.g., Harris ergodic Markov
chains. Indeed, suppose that f : X R I is such that |x Ifldx < 00 and define Exf =
f dx The theergoic teore .v that the average fm = m-] CE f(As) converges
almost surely to Ex f no matter what the distribution of Ao. This justifies our use of fm as
an estimator of Ex f. We will ;?i that there is a CLT for fm if there exists a a2 E (0, 00)
such that, as m 00o,
L(f m Ex f) iN~(0, a"2
As explained in OsI Ilpter 1, CLTs are the basis for .I-i-inidllicl~ standard errors, which can
be used to ascertain how large a sample is required to estimate Ex f. Unfortunately, while
the Harris ergodicity of a Markov chain does imply that the ergodic theorem holds, this
is not enough to guarantee the existence of CLTs. However, if A is geometrically ergodic
and reversible with respect to xr, then the CLT holds for every f such that S, f2d~ o
that is, for every f E L2(;T) [41]. (For more on the CLT in MCil C, see C'I I>. and Geyer
[5], Mira and Geyer [32], Jones [20] and Jones et al. [21].) For a thorough development of
general state space Markov chain theory, see Nummelin [34] and M.~ i-n and Tweedie [30].
Robert and" Rosentha^l [43] provides a concise, self-contained description on general state
space Markov chains (also see Tierney [49]).
As mentioned in OsI Ilpter 1, for a given distribution function, xr, there are large
number of MCil C algorithms with stationary distribution xr. One way to order these
algorithms is based on their performance in CLT. Note that the .I-i-md.l!lle~; variance, O.2
in (1-2) depends both on the function f and the particular MC1| C algorithm that we
are using. Suppose P and Q he the Markov transition functions corresponding to two
different MCijlC algorithms with stationary distribution xr. Let us denote O.2 for these
two algorithms by n( f, P) and v( f, Q) respectively. Assume, both v( f, P) and v( f, Q) are
finite. Then if we are interested in calculating E, f, we prefer the Markov chain P over Q
if v(f, P) < v(f, Q) provided the two chains are equivalent in terms of simulation effort.
On the other hand, if we do not assume any prior knowledge about the function whose
expectation we want to evaluate, we need a uniform ordering as below.
Definition 1. [SE] If P and Q
Markov chains with intericent Igo~~~l.:.:sl..;i measure x., then P is better than Q in the
e~ff~.,.. I ordering written P FE Q. if c(f, P) < 'U(f, Q) for every f E L2(,)
In ChI Ilpter 3 and ChI Ilpter 4, we order different MCijlC algorithms in terms of
efficiency ordering.
CHAPTER 3
BAYESIAN PROBIT REGRESSION
3.1 Introduction
Suppose that Yi,...,Y, are independent Bernoulli random variables such that
Pr(~ = 1) = #(xTP) where xi is a p x 1 vector of known covariates associated with ~, P
is a p x 1 vector of unknown regression coefficients and #(-) denotes the standard normal
distribution function. For ye {(, 1}", that is, y = (yl,..., y,) and yi E {0,1}), we have
i= 1
A popular method of making inferences about P is through a B li-o -1 .I analysis with a flat
prior on p. Define the marginal density of the data as
C'I. in and Shao [6] provide necessary and sufficient conditions on y and {xi}", for
city) < 00 and these conditions are stated explicitly in the Appendix. When these
conditions hold, the posterior density of P is well defined (i.e., proper) and is given by
Unfortunately, the posterior density xr(P | y) is intractable in the sense that expectations
with respect to it, which are required for B li-, -i Ia inference, cannot be computed in closed
form. Moreover, as we mentioned in (I Ilpter 1, classical Monte Carlo methods based
on independent and identically distributed (iid) samples are difficult to apply when the
dimension, p, is large. These difficulties spurred the development of Markov chain Monte
Carlo methods for exploring xr(Ply). The first of these was Albert and Chib's [1993] data
augmentation algorithm, which we now describe.
Let X denote the, \x p design matrix whose ith row is x { and, for z = (zi, ..., z,)T E
RW", let p = P(z) = (XTX)-1XTz. Also, let TN(p, is2, w) denote a normal distribution with
mean p and variance is2 that is truncated to be positive if w = 1 and negative if w = 0.
Albert and Chib's algorithm (henceforth, the "AC als..i s~I lIn~ ) simulates a Markov chain
whose invariant density is xr(P | y). A single iteration uses the current state P to produce
the new state p' through the following two steps:
(i) Draw zi, .., z, independently with ze ~ TN(x'P, 1, yi)
(ii) Draw p' ~ N,( i(z), (XTX)-1)
Albert and Chib [1] has been referenced over 350 times, which shows that the AC
algorithm and its variants have been widely applied and studied.
The PX-DA algorithm of Liu and Wu [27] is a modified version of the AC algorithm
that also simulates a Markov chain whose invariant density is xr(P | y). A single iteration of
the PX-DA algorithm entails the following three steps:
(i) Draw zi, .., z, independently with ze ~ TN(x'P, 1, yi)
(ii)~ ~ ~ ~ 1 Dra g2l ~ am liz x(XI1XI)I; 1Xz)2 and set z' =(z,.,g,
(iii) Draw ii' ~ Np( j(zj), (X X)-1)
Note that the first and third steps of the PX-DA algorithm are the same as the two steps
of the AC algorithm so, no matter what the dimension of P, the difference between the
AC and PX-DA algorithms is just a single draw from the univariate gamma distribution.
For typical values of n and p, the effort required to make this extra univariate draw
is insignificant relative to the total amount of computation needed to perform one
iteration of the AC algorithm. Thus, the two algorithms are basically equivalent from
a computational standpoint. However, Liu and Wu [27] and van Dyk and Meng [51] both
provide considerable empirical evidence that autocorrelations die down much faster under
PX-DA than under AC, which so-----~ -r- that the PX-DA algorithm "mixes f I-I. I than the
AC algorithm. (Liu and Wu [27] also established a theoretical result along these lines see
the proof of our Corollary 1.)
Suppose we require the posterior expectation of f (P) given y, i.e., we want to know
assuming this integral exists and is finite. Let {@}Rj"o denote the Markov chain associated
with either the AC or PX-DA algorithm. We later show in this chapter that {@}Rj"o is
Harris ergfodic. So the ergfodic theorem implies that, no matter what the distribution of
the starting value, Po,
m-1
j=0
is a strongly consistent estimator of E [ f(n) | y] ; that is, fm, E [f ( ) | y] almost surelyv
as m 00o. As defined in C'!s Ilter 2, we ;?i- that there is a CLT for fm if there exists a
0.2 E (0, 00) such that, as m 00o,
ImT, E [ f() | y]) N(0, 02) aS ,n ix OO .-1)
As explained in OsI Ilpter 1, establishing the central limit theorem for fm is crucial to make
honest statistical inference based on {py }Ro. We know that one way to ensure CLT in
(3-1) is by establishing geometric ergodicity of {@}Ro. In this chapter, we prove that the
Markov chains underlying the AC and PX-DA algorithms both converge at a geometric
rate which implies that the CLT in (3 1) holds for every fe L2(7i(lj Iy)); that is, for
every f such that fy, f2(P)xr(PIy)dp < 00. We also establish that PX-DA is theoretically
more efficient than AC in the sense that the .I-i-mptotic variance in the CLT under the
PX-DA algorithm is no larger than that under the AC algorithm. Regenerative methods
are used to construct a simple, consistent estimator of the .I-i-ing d o)tic variance in the CLT.
As an illustration, we apply our results to van Dyk and Meng's [2001] lupus data. In this
particular example, the estimated .I-i-inidllicl~ relative efficiency of the PX-DA algorithm
with respect to the AC algorithm is about 65. Hence, even though the AC and PX-DA
algorithms are essentially equivalent in terms of computational complexity, huge gains in
efficiency are possible by using PX-DA.
The remainder of this chapter is organized as follows. Results that the AC and
PX-DA algorithms are geometrically ergodic appear in Sections 3.2 and 3.3, respectively.
In Section 3.4 we derive results that allow for the consistent estimation of .li~!!llh d ic
variances via regenerative simulation.
3.2 Geometric Convergence and CLTs for the AC Algorithm
We begin with a brief derivation of the AC algorithm. Let RW = (0, 00), R_
(-oo, 0], z = (zi, .., z,)T E R" and let #(v; p, x2) denote the N(p, x2) density function
evaluated at the point ve R Consider the function from RW"x RW" R W given by
x(P,v z )=Is z) )ys a zeIo v)(zi; xf f )
where, as usual, IA (. iS the indicator function of the set A. Note that
i= 1
and hence, xr(P, z | y) can be viewed as a joint density in (p, z) whose marginal is the
target density xr(P | y). This joint density is usually motivated as follows. Let Z1, ..., Z,
be independent random variables with Zi ~ N(xTP, 1). If we define = Ipg (Zi), then
Yi, .. ,Y, are independent Bernoulli random variables with Pr(~ = 1) = # (xT f). The Zi's
can therefore be thought of as latent variables (or missing data) and xr(P, z | y) represents
the posterior density of (p, z) given y under a flat prior on p. The AC algorithm is simply
a data augmentation algorithm (or two-variable Gibbs sampler) based on the joint density
xr(P, z | y). Indeed, a straightforward calculation reveals that
| z y ~ N ( (X'X)X )-1
and conditional on (P y), Z1, .. ,Z, are independent with
Zi | 79, y ~ TN(.r {/, 1, yi).
If we denote the current state of the Markov chain as /9 and the next state as /9', then
the Markov transition density of the AC algorithm is given by
Note that k(P | 79) xr(P I| y) = k(P I | 7') xr(P | y) for all /3, /' E RIW'; i.e., k(P | 79) is reversible
with respect to r(fi | y). It follows ininediately that the posterior density is the invariant
density for the Markov chain, or, in symbols,
Let K(-, ) denote the Markov transition function corresponding to the AC algorithm;
that is, for /9 E RIW and a measurable set A,4
The corresponding m-step Markov transition function is denoted by K'"(ft,4). We now
show that the Markov chain driven by k(fi' | 79) is Harris ergodic.
Let p- denote Lebesgue measure on RIW. Several nice properties follow front the fact
that K(fi, -) has a (strictly positive) density with respect to p. Indeed, if 7t(,) > 0, then
K((P, A) > 0 for all /9 E RIW'; i.e., it is possible to get front any point 79 E RIW to the set ,4 in
one step. This implies that the AC algorithm is ys-irreducible and aperiodic.
In order to establish Harris recurrence, we must introduce the notion of harmonic
functions. A function b : RIW" R is called harmonic for K if h(fi) = (Kh)(fi) for all
/9 E RI>. One method of establishing Harris recurrence is to show that every bounded
harmonic function is constant [34, Theorem 3.8]. Suppose b is a bounded, harmonic
function. Since the AC algorithm is (-irreducible and has an invariant probability
distribution xr(P | y), it is recurrent, which in turn implies that & is constant ~-a.e. [34,
Proposition 3.13]. Thus, there exists a set NV with p(NV) = 0 such that h( ) = c for all
pe N Now, for any pe R P~, we have
which implies that h c. It follows that the AC algorithm is Harris recurrent.
We have now shown that the Markov chain corresponding to the AC algorithm is
Harris ergodic and thus from C'!s Ilter 2 it follows that ergodic theorem holds for it. The
following theorem is the main result of this section.
Theorem 1. The M~arkov chain on RW with transition 1:.: l 0 k (P' | P) (that is, the
Markov chain underlying the AC rlly.>rithm) is geomen,... ellol ergodic.
Proof. We will show that the AC algorithm satisfies the hypothesis of Proposition 1.
We have shown that AC algorithm is p-irreducible and periodic, where p denote the
Lebesgue measure on RP". So, if is a maximal irreducibility measure for the Markov
chain underlying the AC algorithm, then > p. Conversely, if p(A) = 0, then Km(P, A)=
0 for all pe R P? and all me N which implies that ~(A) = 0 and it follows that p > ~.
Hence, pa -. Since the support of p obviously has non-empty interior, it follows that the
support of a maximal irreducibility measure for the AC algorithm has non-empty interior.
We now demonstrate that the Markov chain associated with the AC algorithm is a
Feller chain. Let P and O denote a point and an open set in R ", respectively. Assume
that {#1}"7,, is a deterministicc) sequence in RP~ with p& / such that pt i as
m c o. Two applications of Fatou's Lemma in conjunction with the fact that xr~z|4, y) is
continuous in yield
lim inf K (PA,O) > lim inf k (p'| m ~) dp
= lim inf x( R* | lnz ) z
J to mm
> x~fi' z, y) int inf ~rx~a|7,aU
=K(P O) ,
and hence K(-, ) is a lower-senticontinuous function. Hence, the Markov chain corresponding
to the AC algorithm is a Feller chain.
We apply Proposition 1 with drift function V(fi) = (X/S)T(X/S). Recall that X is
assumed to have full colunin rank, p, and hence XTX is positive definite. Thus, for each
,* > 0, the set
P liE RI'? : V <) I 19=( E RIW : pTXTXp IS <
is compact so the function V is unbounded off compact sets. Now, note that
(KV) (p> ~ i)kf'7)p
= EE [V(:i') r-, ] i?-Y y
where, as the notation elo----- -r- the expectations in the last two lines are with respect
to the conditional densities xr(f' | x, y) and xr~x | 7, y). Recall that xr(f' | x, y) is a
p-dintensional normal density and xr~x | 7, y) is a product of truncated nornials. Evaluating
the inside expectation, we have
E [V(P ') zy] = E [(P ') X X/S' z,y]
=tr(X X(X X)-1) +: XX(X X)-1(X X) (X X)-1X :
=p + X X(X X)-1X :
< p+X z,
where tr(-) denotes trace of a matrix and the inequality follows from the fact that
z I-X X(XX -1TX z>0
for all z E R"n. We now have that
EE ;[V(I') z, ] ;), y < Ep +z z y =p+ E [z: | ,y].
i= 1
Standard results for the truncated normal distribution [19] imply that if U ~ TN((, 1, 1)
then,
E(U2) ~ 2
where #(-) with only a single argument denotes the standard normal density function; that
is, #(v) is equivalent to ~(v; 0, 1). Similarly, if U ~ TN((, 1, 0) then,
E(U2) ~ 2
It follows that
1 +( x pj ) ( if y = 0 .
A more compact way of expressing this is as follows:
SLiE [zf2 | ,y]= 1+(re-i) 4)2 ,/,Tn (3-2)
where I, is defined in the Appendix. Hence, we have
(KVT)(i) = E E[V;(P') zly] i,y) i= i
Recall that the goal is to show that (KV)(P) < AV(P) + L for all PR E W. It follows from
(3-3) that (KV)(0) < p + n. We now concentrate on P eRI \ {0}.
We begin by constructing a partition of the set RW \ {0} using the a hyperplanes
defined by wTP = 0. For a positive integer m, define Nm = {1, 2,..., m}. Let
Al, A2,..., Aan denote all the subsets of No, and, for each j E IT_ define a corresponding
subset of p-dimensional Euclidean space as follows:
Sj = {p3E R"\{0} :wfgT~<0fo rall i EAjand wfg7P >0fo raill i EAj}
where Aj denotes the complement of Aj, that is, Aj = N, \ Aj. Note that
the Sj are disjoint,
U zlSj = Rw \ {0}, and
some of the Sj may be empty.
We now show that if Sj is nonempty, then so are Aj and Aj. Suppose that Sj / 0 and
fix p E Sj. Since the conditions of Proposition 5 are in force, there exist strictly positive
constants {ai }", such that
Therefore ,
aimT + a~lTP + ---+ I, ,tiP = 0. (3-4)
The matrix X has full column rank p, and hence 0 < pTXTXp = CE (xTP)2
E = /ET 72. Thus, there exists an ie N such that wTP / 0 and, since all the ai are
strictly positive, (3-4) implies that there must also exist an i' / i such that I, 4 n r
have opposite signs. Thus, Aj and Aj are both nonempty. Now define C = { je E : T
0}. For each j E C, define
/ieA T n2 ieA.~ zT n2
and
xj = sup Ry (P) E [0, 1].
In the following calculation, we will utilize a couple of facts concerning the so-called
Mill's ratio. First, when n > 0, a (u)/(1 #(u)) > u2 [11, p.175]. Also, it is clear that if
we define
M=sup
ue(-oo,o] 1 (u) '(U
then M~E(0, 00).
Fix j E C. It follows from (3-3) and the results concerning Mills ratio that for all
SE Sj, we have
i=1
< p+ n + (,, )2
1- P)(wP)
ieA ~(~i
iEAj
(KV) (0)
-` (w 4p)2
iEj
(I,.TP)~(WTP)
1-~(WTP)
ie:
iEAj
< p+ n+ (w, P)2 + nM
i=1
=p + ~n(M + 1) + (w 4Tij)2
= xvp +nM+ 1)+R()(L)
where L := p + n(M~ + 1). Therefore, since s~cS,
(K V) (P) < AV (P)+ L ,
where
A := max Xj
jec
Hence, it suffices to show that As < 1 for all j E C.
Again, fix j EC and note that for le R ,, Rj(10)
Rj(P) which means that Rj(P)
depends on P only through p's direction and not on its distance from the origin. Thus,
xj = sup Ry (P = sup Ry (P < sup Ry (P) ,
P6S3 PES3 PES:*"
RIW \ {0}, it follows that
where
Sf* = {p E RP : ||4|| = 1 and wffT < 0 for all i E Aj and wff'i > 0 for all ie Ay } ,
and
Sf* =I {p n:|4|=1adwf o l lA n wf>0frali s
Now sice j*is a compact set in RP" and Rj(P) is a continuous function on Sj**, we know
that
sup R ( ) = Ry ( ) for some p E Sj**
Assume that p E Sj* is sulch that R,( ) =1, that is,
-2 -2
This implies that CE ,l(ll'n O) Again, there exist strictly positive constants
al, a2, to Such that
a~wim + a~wp + --+ a,wT =0o.
But we already, know, that wf#T = 0 for ll i e Aj, and hence it must be the case that
Howecver, w{~i < 0 for a~ll i eA, as a Sf*. This combined with th~e fact thait as a~re all
strictly, positiven shows that w{,T = for all i E Aj. Hence, we have identified a nonzero
such that
ITr = 0 for all ie N,
But this contradicts the fact that W has full column rank. Therefore, we have established
that
sup Ry (P) <1 ,
which implies that As < 1. Therefore, A < 1 and the proof is complete.
Together with the results of Roberts and Rosenthal [41], Theorem 1 implies that
the AC algorithm has a CLT for every f e L2(i( l g. Ill Order to use this theory to
calculate standard errors, we require a consistent estimator of the .-i-mptotic variance, o.2
This topic will be addressed in Section 3.4. In the next section we show that geometric
ergodicity of the AC algorithm implies that of the PX-DA algorithm and that PX-DA is
at least as good as AC in terms of performance in the CLT.
3.3 Comparing the AC and PX-DA Algorithms
The Markov transition density of the PX-DA algorithm can be written as
k*(p'l | )=xp ',yRz z)>z|4 )d
where R(z, dz') is the Markov transition function induced by Step 2 of the algorithm
that takes z z = (gzi,..., gz,)T. It is straightforward to show that the Markov
chain driven by k* is Harris ergodic. Hobert and Marchev [17] provide results that
can be used to compare different data augmentation algorithms (in terms of efficiency
and convergence rate). In order to establish that their results are applicable in our
analysis of k*, we now show that R(z, dz') admits a certain "group representation." Let
"xz | y) = fy, xr(P, z | y) dp. A simple calculation reveals that
|XTX|-4 exp ( z (I H)z/2}
cy~ly (y 2x
where H = X(XTX)-1XT. Let G be the multiplicative group R,+ where group
composition is defined as multiplication; i.e., for gl, g2 E G, gl o g2 = 192. The identity
element is e = 1 and g-l /.Telf-armaueo su~g dg/g where dg
denot~es L~ebesgue: measure: onl R Let~ GJ act on the left of RW" through component-wise
multiplication; that is, if geG c andu z e R", then gz = (gzi,..., gz,). With the left group
action defined in this way, it is easy to see that Lebesgue measure on RW" is relatively left
invariant with multiplier X(9) = 9"; i.e.,
J~S R ("ll J Rild
for all geG c andu all int~egrable functions h : R's R I. (See ChI Ilpters 1 & 2 of Eaton [10]
for background on left group actions and multipliers.) Let Z denote the subset of RW" in
which x lives; i.e., Z is the Cartesian product of n half-lines (RW and RW_), where the ith
component is R,+ if yi = 1 and RW_ if yi = 0. Fix x E Z. It is easy to see that Step 2
of the PX-DA algorithm is equivalent to the transition x i gx where y is drawn from a
distribution:- on G having density function
X(9);g ( y)v4(d9) 9;z-ls(gz y)dyg (c'( H):) ZT- -g' (IHiz/2 d
.fe X(9)~rg ( y)v4(d9) .1 9;z-li(gx | y)dy 2(it-2)/2F(n/2) eg.
Furthermore, S, (g)xr(gx | y);4(dy) is positive for all x E Z and finite for almost all: E Z.
Consequently, we may now appeal to several of the results in Hohert and Marchev [17].
First, their Proposition :3 shows that R(x, dr') is reversible with respect to xr~x | y) and it
follows that k*(f3'|/3) is reversible with respect to xr(P3| y). We now use the fact that the
AC algorithm is geometrically ergodic to establish that the PX-DA algorithm enjoys this
property as well.
Corollary 1. The M~arkov chain on RP" with transition I7:. um / k* (f' | /3) (that ise. the
M~arkov chain underlying the PX-DA rlly~.:,thm) is geomtl,... ril;i ergodic.
Proof. Define
Let K and K'* denote the Mlarkiov operators on L (r(n3 y)) associated with the Markiov
chains underlying the AC and PX-DA algorithms, respectively [25, :32]. Denote the norms
of these operators by ||K|| and ||K*||. In general, a reversible, Harris ergodic Markov
chain is geometrically ergodic if and only if the norm of the associated Markov operator
is less than 1 [41, 44]. By Theorem 1, the AC algorithm is geometrically ergodic and
consequently ||K|| < 1. But LiU and Wu [27] show that ||K*|| < ||K|| [see also 17, Theorem
4] and hence ||K*|| < 1, which implies that the PX-DA algorithm is also geometrically
ergodic. O
We have now shown that the Markov chains underlying the AC and PX-DA
algorithms are both reversible and geometrically ergodic and hence both have CLTs
for all f e L2(r(p |)). We now use another result from Hobert and Marchev [1'7] to show
that the PX-DA algorithm is at least as efficient as the AC algorithm.
Theorem 2. Fix f e L2(P Iy).I a~,ka a~,k d6801 the UGnritRCeS in the CLT f~or
the AC and PX-DA rlly.., /thms,, i~ 1.: 0 .l;, then 0 < o)~,k* a~,k
Proof. The result follows immediately from Hobert and Marchev's [2008] Theorem 4. O
In order to put our theoretical results to use in practice to compute valid .I-noi- nd l'lc
standard errors, we require a consistent estimator of the .I-i-mptotic variance and this is
the subject of the next section.
3.4 Consistent Estimators of Asymptotic Variances via Regeneration
We begin with the AC algorithm. Instead of considering the Markov chain on RW
driven by k(P' | p), we consider the joint chain on RW"x RW" with Markov transition density
given by
The Markov chain dlefined by k, which we denote by {pyj, zy}Ro,1 has invariant dlensity
xr(P, z | y) and satisfies the usual regularity conditions. It may seem to the reader more
natural to ulse the Malrkov transition density k (z', p' | z, 4)= p'|zyxz| ,)
for the joint chain. We have discussed this issue in Remark 3. Of course, the marginal
chain {@}Rj"o has the Markov transition density k(P' | P) no matter which version of the
Markov transition density we choose for the joint chain. While this is obvious for the
chain corresponding k, it can be easily shown for k by considering two consecutive steps
of the joint chain. Let k(P'|4) be the Markov transition density of the {@},t"o chain
corresponding k. Suppose, two consecutive steps of the joint chain are (P, z) and (P', z').
Then ,
=. k(p'|z, P)k(z|4)dz
where the conditional densities in the third equality are obtained from the Markov
transition density, k, of the joint chain. The de-initializing arguments of Roberts and
Rosenthal [42] can be used to show that the joint chain {pj,zy},t"o inherits geometric
ergodicity from its marginal chain {pj },to Note that {#y, zy }=,o is the chain that is
actually simulated when the AC algorithm is run (we just ignore the zys).
Suppose we can find a function a : RW x RW" [0, 1], whose expectation with respect
to xr(P, z | y) is strictly positive, and a probability density d(P', z') on RW"x RW" such that for
all (P', z'), (p, z) ERI? x R"n, we have
k (P', z | 4 z) > S (r, z) d(P', z') (3-6)
This is called a minorization condition [22, 30, 43] and it can be used to introduce
regenerations into the Markov chain driven by k. These regenerations are the key to
constructing a simple, consistent estimator of the variance in the CLT. After explaining
exactly how this is done, we will identify s and d for both AC and PX-DA.
Equation (3-6) allows us to rewrite k as the following two-component mixture density
k (p', z' | 4, ) = s (p, z)d(P', z') + (1 a (#, z)) r (P' z' | 4, z) (3-7)
where r is the so-called residual density defined as
k (P',z' | 4, z) Stp,z)d(P', z')
1 s(P, z)
when s(P, z) < 1 (and defined arbitrarily when s(P, z) = 1). Instead of simulating
the Markov chain {pj,zy}Rj"o in the usual way that alternates between draws from
"xz | p, y) and xr(P | z, y), we could simulate the chain using the mixture representation
(3-7) as follows. Suppose the current state is (pj, zy) = (p, z). First, we draw 6j ~
Brcnoulli(s(P, z)). Then? if ri, = 1, wei draw (iy I:, zy) from~ dl, anld if by = 0, we draw~n
(pj 1, zy41) from the residual density. The (random) times at which by = 1 correspond
to regenerations in the sense that the process probabilistically restarts itself at the next
iteration. More specifically, suppose we start by drawing (Po, zo) ~ d. Then every
time by = 1, we have (pj 1, zy 1) ~ d so the process is, in effect, starting over again.
Furthermore, the "tours" taken by the chain in between these embedded regeneration
times are iid, which means that standard iid theory can be used to analyze the .I-i- pind icl~
behavior of ergodic averages, thereby circumventing the difficulties associated with
analyzing averages of dependent random variables. For more details and simple examples,
see Mykland et al. [33] and Hobert et al. [16].
In practice, we can even avoid having to draw from r (which can be problematic)
simply by doing things in a slightly different order. Indeed, given the current state
(pj, zy) = (P, z), we draw (pj 1, zy 1) in the usual way (that is, by drawing from x~z | p, y)
and xr(P | z, y)) after which we "fill in" a value for by by drawing from the conditional
distribution of 6j given (pj, z) and (pj 1, zy 1), which is just a Bernoulli distribution with
success probability given by
We now describe exactly how these supplemental Bernoulli draws are used to construct a
consistent, estima~tor of E [f(p) | y] a~s well as a con~sistentt estim~a~tor of the correspon~ding
.I-imph!lli(c variance.
Suppose the Markov chain is to be run for R regenerations (or tours); that is, we
start by drawing (Po, zo) ~ d and we stop the simulation the Rth time that a by = 1.
Let 0 = -ro < -rTi r~ 72 < R be the (random) regeneration times; that is,
-te = min {j > -re-1 : by _l = 1} for te { 1, 2, .. ., R}. The total length of the simulation, -rR,
is random. Let NI~, N2.., ** R be the (random) lengths of the tours; i.e., Nst = -rt T-rt1
and define
N\ote that the (Ns;, St) pairs are iid. The strongly c~onsisten testimnator of E [ f (P) | y] is
S1
j=0
whee 3= R 2=1 Stan N= 2<1 N. Because the Markov chain driven by k is
geometrically ergodic, the results in Hobert et al. [16] are applicable and imply that, as
long a~s there exists anl au > 0 such that E[| f(P)|2+ | y] < 00, then1
d fr-E~f0)|y N(072 9)
as R c o. (Note that the requirement of a finite 2 + a~ moment is a bit stronger than the
second moment condition discussed earlier.) The main benefit of using regeneration is the
existence of a simple, consistent estimator of y2, Which takes the form
2 i~ t=1St i
Remark 1. The CLT in (3-9) is lkib~l;, different from the CLT discussed earlier, which
takes the form
Hobert et al. [16] explain that the two CLTs are retlated by the equation 2 =" E[s(P,z,) | y]O.2
Remark 2. A further ral.. tr,:y.: of using regeneration to calculate standard errors is that
the starting distribution is prescribed to be d(P,zx) so that burn-in is a non-issue.
We now derive a minorization condition for the AC algorithm using the "distinguished
In .11.1 technique introduced in Mykland et al1. [33]. First, note t~hat k (p', z' | 4,z) does not,
depend on P and as a consequence, neither will our function s. Fix a distinguished point
z, E R"n and let D be an p-dimensional hyper-rectangle defined by D = D x x D,
where Di = [ci, di] and ci < di for all i = 1, 2, .. ,p. Now note that
=s(z) d( ', z')
where
s (z)= E inf and d (P', z') = 1 rx (' | p', y) x (P' | ze, y) ID( ') (3-10)
and
J~~,( RP""( J.vi (l~"~( RnJD
Clearly, d(P', z') is a probability density on RW x RW". All that is required to apply the
regenerative method described above is the ability to draw from the density d (to start
the simulation) and the ability to calculate fl in (3-8). Making a draw from d( ', z') can
be done sequentially by first drawing P' from the truncated density E-1~(P / X* Y) D P)
(which does not require the value of E) and then drawing z' from xr~z' | P', y).
-1X z, I
exp zX(XX-Xz
exp z X(XTX)-] X z
We now provide a closed form expression for s(z), which in turn will give a closed
form expression for the success probability r7 First,
x(4 | z, y) = 1 exp
(2x)9|XTX|-
(z)) .
21
li(z)) XTX (4
where P(z)
S(z)
(XTX)- XTz Thus,
x(P | z, y)
E inf
E inf'
06Dexp (z-,'~x(,) XT z) 0XTX (z,)]
exp'
z' X(X7X)
z,) Xp)
exp c gIi =]
(tii )
(ti) + d Iti p
exp
SzT X(XT X)
'X1X
i~onf exp(z
where tT = (z z,)TX. Therefore, the success probability rl in (3-8) becomes
=~j mf) IDp~l j+1)
s#xD d5 ~, xjy) j1j1 +1 9
= mf ID j+1i
~(~lpj~ex)~pj~ ct Is (@ ) y I
exp -zXXX-Xz
( )x Xp I
ex
- -~~~
j+1)
where ti)T 1 z -z)TX and py 1,4 is the ith element of the vector 4 ~. Note that, rl
depends only on pja1 and zy. Also, note that, E is not required to calculate rl.
Notice that there is a chance for regeneration only when the P component enters the
p-dimensional rectangle D. This so__~-1-;- making D large. However, increasing D too
much will lead to very small values of rl. Hence, there is a trade-off between the size of D
and the magnitude of the success probability, rl.
Modifying a computer program that runs the AC algorithm so that it simulates
the regenerative process is quite simple. Since code for simulating from xr( '|z, y) and
xr~z' | p', y) is already available, it is straightforward to write code to simulate from d. All
that remains is a small amount of code to calculate rl and compare it to a Uniform(0, 1)
after each iteration of the AC algorithm.
Remark 3. A minorization condition can also be obtained for k using the "distinguished
1p~...ni" technique. However, in this case, the z component has to enter an n-dimensional
i ),l
exp -T zf X(TX)~-X zy
exp [i I =) 1 I /(j) ) 1 6"ID j+
r LItri:ll~. before a regeneration is possible. In most applications, a is much Irlary ? than p,
and when n is Iary,l the <.-?.;?7.17~l;, that all n components of z -:::l,,tll~r, .... ;, le; enter their
assigned interval is ini'~..a ll;i so small that the rll' .-r:thm is of no practical use. Moreover,
as mentioned above, this problem cannot be solved .: cl~;, by I,,;;.;. .9l the intervals 'ary ,I .
Regeneration can also be used in conjunction with the PX-DA algorithm. Indeed, we
now show that a simple modification of our minorization condition for the AC algorithm
yields a minorization condition for the PX-DA algorithm. The Markov transition density
of the PX-DA algorithm can be rewritten as
k*(p' | 4)=xp z yhg|zxz| ,y gd
where h(g | z) is the density in (3-5). As before, instead of working directly with k*, we
consider the joint chain on RW"x RW" x RW with Markov transition density given by
k*(p', (z',g') P, (z, g)) = [h(g' | ')(z' | ', y)] x(P |gzy).
Let {#y, (zy, gj)} o denote the Markiov chain corresponding to k*. Since T(4 | y) is the
invariant density for k*(P'|4), we have
=( x)p | gz~~iy)L [hg| i~z|y) g z
and from this it follows that xr(P | z,: y) [h(g | z)xr(z | )] is the invariant density for
{pj, (zy, g))}Ro. As before, considering two consecutive steps of the joint chain, it
can be shown that the marginal chain {pjy}Ro has Markov transition density k*. It is
straightforward to show that the joint chain is Harris ergodic, and, as before, the chain
associated with k* inherits geometric ergodicity from its marginal chain {Pjy}Ro. We can
get a minorization condition for k* as follows
k*(p, (', g) |4, (, g) =[h.(g' |')7i(z' | /j', y)] (p' | gz, y)
cs( Is (t ) did Ig (t )
exp -~ zTX(XX-Xz
s(gjzy) d*(,By 1, (zy 1, gj 1))
.~ x(6gzyy (,Bl |zej, y)
expi gzX (XTX)-1X z p
exp z X (X X)-1X z,
exp g3 zX(X)Xz
=exp= ct *()Ipg (t () +dit*j Ip_ (t*j ) t ,y4ID;j1
j+1>
where t*C F) = (yzi z)7X. TheCoremI 2 states that the .l-i Intl'l' d ic va~rian~ce in? th~e CLTi
for the PX-DA algorithm is no larger than that for the AC algorithm, i.e, O < o-f,k*I
o-f~k < OO for all f e L2 (7(,6 |y)). However, we know from Remark 1 that the regenerative
method is based on a slightly different CLT whose .I-i-inidllicl~ variance has an extra factor
involving the small function, namely E(s() | y), from the minorization condition. Although
the small fumetions in the two minorization conditions that we derived for the AC and
PX-DA algorithms are slightly different, E(s(.) | y) remains exactly the same as shown
>l~ o( in x(,6 |)( gz, y) hg|z)xz|,,y) (6|zyID;
=s(gz)d*(,', (z',g'/)),
where the function s(-) is as defined before and
d* (,', (z', g')) = [h(g' | z')r(z' |,6', y)]x(,6' ze,y/)I7D ;6)
where E is alSo the same as before. Hence, for the PX-DA algorithm the function rl
becomes
i1
below
=~~g sIz~~ | y)z|y)1
i= 1
=~~~2 cos sz) '(I H)z') g"2- e-z' (I-H)z'/2
x exp, z' (I H)z'/2g2)C --n R {} i R {} i
1+ (z') (I H)7( z')l~21J (I~T(1~~X
2(n-2)/2F(n/2) (Z' g" ex (- '(I- )z/g2 /
where z-' = gz. Clonseque~ntly, if E[| f~(P)|2+ < OO FOT Some a? > 0 and if r ,k and
qj,k* denote the variances in the regenerative CLT for the AC: and PX-DA algorithms,
respectively, then 0 < yf~k* I ,k <00. Hence, PX-DA remains more efficient than AC in
the regenerative context.
We end this section with an illustration of our results using van Dyk and Meng's
[2001] lupus data, which consists of triples (yi, Xl, Xi2), i = 1,... ,55, where xsi and Zi2
are covariates indicating the levels of certain antibodies in the ith individual and yi is an
indicator for latent membranous lupus nephritis (1 for presence and 0 for absence). van
Dyk and Meng [51] considered the model
with a flat prior on p. We used a linear program (that is described in the Appendix) to
verify that C'I. in and Shao's [2000] necessary and sufficient conditions for propriety are
satisfied in this case.
In order to implement the regenerative method, we had to choose the distinguished
point z, as well as the sets [ci, di]. We ran the PX-DA algorithm for an initial 20,000
iterations starting from the maximum likelihood estimate of P given by P = (-1.778, 4.374, 2.428).
We took the distinguished point to be the average value of z over this initial run. For
is { 0, 1, 2}, let pi and as denote the sample mean and sample standard deviation of the
#is over this initial run. Wle set De [iA 0.09 si, A+ 0.09 s,]. (The factor 0.09 s, was
chosen by trial and error.)
While generating Gamma and multivariate normal random variables is straightforward,
we need an efficient algorithm for generating truncated normal random variables. We
used the accept-reject algorithm of Robert [39] to generate one-sided truncated normal
random variables. We ran AC and PX-DA for R = 100 regenerations each. This took
1,223,576 iterations for AC and 1,256,677 iterations for PX-DA. We used the simulations
to estimate the posterior expectations of the regression parameters and the results are
shown in Table 3-1. (Results in C'I. i. and Shao [6] imply that there exists a~ > 0 such that
E [|py |2+" ly < OO for je {0(, 1, 2}).) It is strikiing that the estimated .I-i-ing~l .i ic variances
for the AC algorithm are all at least 65 times as '609.-~ as the corresponding values for
the PX-DA algorithm. These estimates -II__- -r that, in this particular example, the AC
algorithm requires about 65 times as many iterations as the PX-DA algorithm to achieve
the same level of precision. (We actually repeated the entire experiment seven times and
th~e estimates of 7 ,k 7 ,k* ran~ged betweenl 40 anld 145.)
Table 3-1. Results based on R = 100 regenerations
AC Algorithm PX-DA Algorithm
Parameter estimnate s.e. (~fi;fk leStimnate s.e. (;jf;k* ,k
Po -3.060 0.097 -3.018 0.012 6;6.6;
P1 7.005 0.190 6.916 0.023 66.9
2a 4.037 0.121 3.982 0.015 63.1
CHAPTER 4
BAYESIAN MULTIVARIATE REGRESSION
4.1 Introduction
Suppose Yi, Y2, are d-dimensional random vectors satisfying the linear
regression model
where p is the k x d matrix of unknown regression coefficients, xi's are k x 1 vectors of
known explanatory variables and we assume that, conditional on the positive definite
matrix E, the d-variate error vectorS E1, ,E are independently and identically
distributed with common density
fH E) = exp E E-1E d1H(5) (4-2)
where H(-) is the distribution function of a non-negative random variable. The density
fH is a multivariate scale mixture of normals and it belongs to the class of elliptically
symmetric distributions. The density fH can be made heavy-tailed by choosing H
appropriately. For example, when H is a Gamma(" ,\ )distribu~tion fulncition, fH becomes
the multivariate Student's t density with v > 0 degrees of freedom i.e., the density of El
becomes proportional to [1 +-FT- v"E 2-6 II !!v of the results in this chapter a~re
specific to the multivariate Student's t regression model.
We can rewrite the regression model in (4-1) as
y = Xp +E
where y = (yl, .., y,)T is the a x d matrix of observations, X = (xl, x2., )T iS the
a x k matrix of covariates and E = (E, .. En) iS the a x d matrix of error variables. The
likelihood function for the regression model in (4-1) is given by
We consider the standard noninformative prior on (P, E), i.e., we assume that xr( E) oc
|E| 2 The posterior density takes the following form
where c2 9) is the marginal density of y given by
d(d+1)
where W CR 2W is the set of d x d positive definite matrices. Fernandez and Steel [12]
proved that c2 9) < OO if and only if a > d + k. In section 4.2, we give an alternative
proof of the posterior propriety. A byproduct of our proof is a method of exact sampling
from xr(P, Ely) in the particular case when n is exactly d + k. Throughout this chapter
we assume that n > d + k. We also assume that the covariate matrix, X, is of full
column rank i.e., r (X) = k. The posterior density in (4-3) is intractable in the sense that
posterior expectations are not available in closed-form. Also, our experience shows that
it is difficult to develop a useful procedure for making i.i.d. draws from xr(P, Ely). In this
chapter we focus on MC1| C methods for exploring the posterior density in (4-3).
We develop a data augmentation (DA) algorithm for xr( Ely) in section 4.3.1. It
has been noticed in the literature that the standard DA algorithm often suffers from slow
convergence [51]. Empirical and theoretical studies have shown that alternative algorithms
that are modified versions of the standard DA algorithm such as the Haar PX-DA
algorithm and the ;;;.,97, l.:1r augmentation algorithm often provide huge improvement over
the standard DA algorithm (Liu and Wu [27], van Dyk and Meng [51], Roy and Hobert
[46], Hobert and Marchev [17]). In section 4.3.2, we develop the Haar PX-DA algorithm
for the posterior density in (4-3).
We then specialize to the case when the errors, Ei S, have a Student's t distribution
i.e., the mixing distribution, H(-), in (4-2) is a Gamma(, ") disotribu~tion fu~nction.W
prove that in this case, under certain conditions, both the DA and the Haar PX-DA
algorithms converge at a geometric rate. As mentioned in Chapter 2, the geometric
ergodicity of a Markov chain guarantees the existence of central limit theorem. Using
results from Hobert and Marchev [17] we also conclude that the Haar PX-DA algorithm is
at least as efficient as the data augmentation algorithm in the sense that the .-i-mptotic
variances in the central theorem under Haar PX-DA algorithm are never larger than those
under the DA algorithm. Some of these results are generalizations of results from van
Dyk and Meng [51] and Marchev and Hobert [28] who considered the special case where
there are no covariates in the regression model (4-1), i.e., X = (1, 1,...,1)T and H is
Gamma(" \ ) distribu~tion fuinction
The rest of this chapter is laid out as follows. In the next section, we prove the
propriety of the posterior distribution. In section 4.3, we describe the DA and the Haar
PX-DA algorithms. In the last section, we compare the two algorithms and prove that
both the algorithms converge at a geometric rate.
4.2 Proof of Posterior Propriety
In this section we address the propriety of the posterior distribution. In particular, we
prove that xr(P, Ely) is a proper density for almost all y if and only if a > d + k.
Theorem 3. Let y be Lebesgue measure on R d". The posterior I1:.:; 0 Hr(P, |y) is
proper for y- almost all y if and only if a > d + k.
Proof. We want to show that
for y- almost all y if and only if a > d + k. Recall that the likelihood function f(y|4, E)
is itself an integral with respect to the distribution function H(-). We now use this fact
to rewrite c2 9) aS aHOther integral which is defined on a larger space and is easier to
handle. In order to simplify notation, we assume that H(-) has a pdf h(-) (with respect
to Lebesgue measure on R,+). We introduce latent data q = (ql, q2, > ) Such that,
conditional on (i, E), { (y;-, qj)) n are iid pairs satisfying the relations
and qj|#,C E h(-).
If we denote the joint conditional density of y and q hy fly, q|/3, E), then by (4-1) and
(4-2) it follows that
I
f(U 9 /S, E)49
f(UO91j,, / )f(4jP9) / E4j
f(U /SE). (4-5)
So we get
JW JR
JW JR L JR" J
R" WRf(y q|/S, E)x(/3, E)d/S dEdq
where the last equality follows by Fubini s theorem. Note that, the integrand in the right
hand side of the above equation is given by
II
f~ ~ ~~~n (y, q|/3 EYj i ) exp
(2xr) z|E|lz =1
2 )=1
I
/(UO/S E)
Let Q be an ax n diagonal matrix with diagonal elements (- -,i~ -). Then,
j=1
j=1
j= 1
j=1
=tr E-p 0 -p p pyQ
(X Q- X)-1XTQ-ly. The following definition of the
Arnold [2, C'!s Ilter 17]
Yj)TC-I(PTXj
where, = (XTQ-1X)-1 and p =
matrix normal distribution is from
Definition 2. [] Suppose a~ is an mxr matrix;, E and T are mxm and rxr non-negative
7. I;,../.: matrices. We .rt;, that Z has a matrix: normal distribution with parameters a~,
and T if Z is an m x r random matrix: having moment generating function given by
and we write Z ~ Nm,r (a, E, T) In this case we have E(Z) = a~. Moreover, if E and T are
positive 7. It...:I~ matrices then Z has the following I. ,.-.:1/ function
1 1
.fz(z) =ep -t{ ( ) ( ) .
(2xr)mr/21 r/2 rm/2 2
Since r(X) = k, it follows that X Q-1X and hence R is a p.d. matrix. Thus,
(2r)9 | |0| 2I2I exp -
(2xr) 2 |E|?
Itr E -y -p || hq)E -
2
d+(n-k)+ 12
(2xr) 2 2 I = ( i
We now assume that n > d + k and will show that c2 9) < Xo. We f Tst show that when
n > d + k, y Q- y <"0- ft is a p.d. matrix with probability one (with respect to the
Lebesgue measure on R~d"). Notice that
yTQ-ly rgS-1q = Y -1Y YT -1X(X Q-1X)-1X Q-1Y
= g~ yi~x Q- I-Q ( X)-X'iX Q-Q- (4-7)
and since I Q-2X(XTQ- X)- X Q-2 is an idenipotent matrix, it implies that
yTQ-ly ,TS2-if is a positive senli-definite matrix. Now we prove that y Q-ly ,rg2-11
is a p.d. matrix by showing that |yTQ-ly 79S-1<| / 0 (with probability one). Let A be
the n x (d + k) augmented matrix (X : y). Then,
A'Q-1 A = Q X
X Q-1X X Q-1Y
y Q-1X Y Q-1Y
Therefore ,
|A'Q- A| = |X Q- X||Y Q-1 y -YTQ- X(XTQ- X) XTQ- Y|
= X Q-1X||y Q-l 79S-11
= |C-ll 7 -l 79S-19|. (4-8)
Since r(X) = k, we know that X Q-1X is a p.d. matrix and hence |0| > 0. Also since
n > d + k, the d + k columns of A are linearly independent with probability one because
the probability of n dimensional random column vectors of y lying in any linear subspace
of R's with dimension I n 1 is zero (with respect to the Lebesgue measure on R~dit
So, A"Q-lA is a p.d. matrix and hence |A"Q-lA| > 0. Then from (4-8) it follows that
|yTQ-ly p'O2-1p| > 0.
To integrate the expression in (4-6) with respect to E, we use the following definition
of Inverse Wishart distribution.
Definition 3. [29, p. 85] Let 8 be u p x p p.d. matrix;. Then for some m > p the p x p
random matrix: W is said to have a Inverse Wishart distribution with parameters m and 8
if the p.d.f of W1 (wi!th, respectt to Lebesgue measure on, E. I, restrictedl to th~e set wrh~ere
W > 0) is given by
m+p 1-1 -1
mp p(p-1) m ]/ .-i)
2 2 x 4 102 __ -,,,
f (W, m, 8)
and we write W ~ IW,(m, 8).
Hence if a '> d + k i.e., a k > d by the above definition of Inverse Wishart
distribution, we have
2~ ) ~ x r 1/ 0( ( k 1-i)) a
whn =d + k
If a=1 d + k using (4-8i) wegl getll \
4-9)
2 4
aca+ |A hq).( -0
I i1 4d~-
JW JR
Since h(-) is a probability density function, it follows that in the case n = d + k,
which, of course, is a finite number. So, we have proved that in the particular case when
n = d + k, the posterior distribution is proper with probability one. Then, an application
of Lemma 2 of Marchev and Hobert [28] shows that for y-almost all y the posterior is
proper for n '> d + k.
We will now show that the posterior distribution is improper when n < d + k. Let
L = yTQ-ly p'O2-1p. It is easy to see that I Q-2X(XTQ-1X)-1XTQ-2 is an
idemlpotent matrix wsithl tr I 1 -X(XT-1'X)-~1XTQ- = n- k.- We also kllno that
rankly) = d with probability one. Hence, from (4-7) it follows that if a < d + k, then L
is a p.s.d. matrix i.e., there exists a vector xo(f 0) such that Exo = 0. We use this fact to
show that the integral in (4-9) diverges when n < d + k.
Since E is a symmetric matrix, the Jacobian of the transformation Z = E-l is given
by J = |Z|-(d ) [29, p. 36]. Therefore, we have
|E|exp tr(E 2)d Z exp -trZ) Z
w2 2
The matrix Z is p.d. So, by Clo d.1 -l:y decomposition, Z can be uniquely represented as
Z = LLT, where L is a lower triangular matrix (1.t.m.) with positive diagonal elements.
Th~e Ja~cobian of th~e corresponding transformationl is given by | J| = 2" nl0 1 -i+ 1[29, p.
36]. Let L = (li, 12 d). Therefore,
|f, Z| exp tr (Z) dZ= 2 | Lrn-kd-1ep -t L 1-+d
1i 1
= 2 |Lt-kd-exp -tr 1 -+L
i=1i= 1
=~~~~~ 2 x 101 1-kiL
ad~ ~ i 1 --dl~ i=1 1=
where
U = { (Un1, U21,, i2, L, Udd) E H2 ii > 0, Vi = 1, d}.
Notice that the columns of L form a basis of R d. So there exists constants bl, b2,..., bd
with b, j 0 for some i such that xo = 2, bil Suppose i'= min {ie {1, 2, .. ., d} : bi j
0}. Now consider the transformation L = (li, 12 d) O = (ox, 02, Od) Where
oi = 14 V i / i' and oi/ = zo i.e., O = LA, where
1 0 --- 0 --- 0
0 1 --- 0 --- 0
A=
0 0 --- bi/ --- 0
0 0 --- bd --- 1
Then the Jacobian of the transformation is | |A| | = | bi |-d [29, p. 36]. Note that
14, = of, -e D bo and in particular lits, = oi/i/ because oi/ = 0 for i > i'. Since
foi/ = 0 we have
d d
exp i1 01 1 -k-idL
ifi 1" i= 1
exp o s bo r- iso -d
i/i i' i>i/ i>i' i=1
exp o foT + 1c o Fo1 oi O-k-idO
vi/ ii/ i i=1
where
V = {(v1 v21, U22, 31, Udd) ER : 2 > 0, Vi / i' and A~ I > 0}.
By Fubini's theorem we can rearrange the order of integration. Notice that oi/ does not
appear in the exponential term and the only term involving oi/i/ in the above integral is
oppk-i'~. Hence the above integral dliverges since
oppk-[ i R i'~i')I(bs >o 0) +Is os,)rIcbs, 0u) does, = 0
So, we have now proved that the posterior is proper for y- almost all y if and only if
n~d+k. O
As we mentioned in the introduction that Fernandez and Steel [12] gave a proof
of propriety of the posterior density xr(P, Ely). A byproduct of our alternative proof is
a method of exact sampling from xr(P, Ely) in the particular case when n is d + k. We
describe the method now.
Let xr(q, p, Ely) be the joint posterior density of (q, p, E) given by
Then from (4-5) it is easy to see that
/( "(q, P, Ely)dq = xr( Ely). (4-11)
So iid draws from xr( Ely) can be obtained by making i.i.d draws from xr(q, P, Ery) (and
then just ignore the q component). Draws from xr(q, p, Ely) can be made sequentially via
the following decomposition
In the special case when n = d + k, from (4-10) we know that xr(qly) = nL, h(qi). So, in
this case an exact draw from xr( Ely) can be made using the following three steps:
(i) Draw 41,q2,...q,, independently where qi ~ h(qi).
(ii) Draw E ~ IWd[n k, (yTQ-ly yTQ-1X(XTQ-1X)-1XTQ-ly)- ]
(iii) Draw PT ~ N~d~k NT -1X(XTQ-1X)-1, E, (XTQ-1X)-1)
Standard statistical packages like R (R Development Core Team [36]) have functions for
generating random matrices from the Inverse Wishart distribution. One way to generate
Z ~ Nm,r(p-, E, T) is to first independently draw Ze ,pT hr steihrwo
p- for i = 1,..., m. Then take
ZT
ZT
Z= E2
ZTn
4.3 The Algorithms
In this section, we develop the DA and the Haar PX-DA algorithms for the posterior
density (4-3). We first develop the DA algorithm in Section 4.3.1 using the latent data
q = (qi, q2,., q n) and the joint posterior density of xr(q, p, Ely). We then derive the Haar
PX-DA algorithm in section 4.3.2. In the special case, when the observations, yi's, are
assumed to be from a multivariate Student's t location-scale model, van Dyk and Meng
[51] developed the marginal augmentation algorithm, which is a modified version of the
standard data augmentation algorithm, for the density (4-3). Hobert and Marchev [17]
have shown that when the group structure exploited by Liu and Wu [27] exists, marginal
augmentation algorithm (with left-Haar measure for the working prior) is exactly same as
the Liu and Wu's [1999] Haar PX-DA algorithm. In section 4.3.2 we show that a similar
group structure can be established for analyzing the posterior density in (4-3) and so
marginal augmentation algorithm is the same as the Haar PX-DA algorithm in our case.
4.3.1 Data Augmentation
We now describe the basic data augmentation (DA) algorithm. The DA algorithm
simulates a Markov chain with Markov transition density
where xr(P, E1q, y) and xr(q|4, E, y) are the conditional densities obtained from the joint
density xr(q, p, Ely).
Note that, k(P, E|4', E')xr(P', E'|y) = k(P', E'|4, E)xr(P, Ely) for all (p, E), (P', E') E
E"'' x W, i.e., k(P, E|4', E') is reversible with respect to xr(P, Ely). It then immediately
follows that xr(P, Ely) is the invariant density for the DA algorithm, i.e.,
Since the Markov transition density, k(P, E|4', E') is strictly positive (with respect to the
Lebesgue measure on E"'' x W), using similar arguments (that we used to show the AC
algorithm is Harris ergodic) as in Chapter 3, it can be shown that the DA algorithm is
Harris ergfodic. Then from the discussion in C'!s Ilter 2, it follows that the ergfodic averages
based on the DA algorithm can be used to estimate posterior expectations.
A single iteration of the data augmentation algorithm first uses the current state
(P', E') to generate q from xr(q|4', E', y) and then draws the new state (p, E) from
xr(P, Elq, y). Simulating from xr(P, clq, y) can be done sequentially by first drawing
E from xr(E|q, y) and then drawing P from x ( | E, q, y). From section 4.2, we know
that conditionally P|E, q, y follows a Matrix Normal distribution and the conditional
distribution of Elq, y is an Inverse Wishart distribution. Conditional on (P, E, y) qi's are
independent with
gi|#F, E, y in hfin x (x -y)E-(x -y
In the particular case when h(-) is Gamma (" ) ine., when yi, y2, .. "r i" i nrTOrS~umed to
be observations from Multivariate Student's t regression, we get
ind v + d v + (P Xi yi)TE-1(PTxi yi)
2' 2
4.3.2 Haar PX-DA Algorithm
In this section we derive the Haar PX-DA algorithm using the left Haar measure as
described in Hobert and Marchev's [2008] Section 4.3. From Section 4.2, we know that
"(q y) c |X Q-] X-I |y Q- y- QT- X(X Q--X- X)- _, X Q y qhi)
i= 1
As in chapter 3, let G be the multiplicative group R,+ where group composition is defined
as multiplication. Again, the left Haar measure on G is vl(dg) = dg/g where dg denotes
then Lebesgue m measure on Rm .Lt G1 act on R" through component wise multiplication,
i.e. gq = (ggi, ,.i.; ). Then it is easy to see that Lebesgue, measures on Rn is relatively
left invariant with multiplier X(g) = g". In order to construct the Haar algorithm, we need
to verify that m(q) = faxr(.i;|
given y7 can also be written as
d n-k d
(ur~uXTQ-1X yTQ-ly y Q-1X(XTQ-1X) X Q-1 y ii2 4i4 q4hB
Therefore it follows that xr(l-i.;|<) = c3 =1, */*]~ ) Where c3 1S a COnStant that does not
depend on g. Hence,
m~~~q)~ = 3O )O = C3
i= 1 i= 1
In1 order to show that n(ql) < co, sup~pose~ x ~ -h( ); x > 0 and consider th~e standard
noninformative prior, 1/o-, for the scale parameter, o-. Then the corresponding posterior
distribution is proper since
/1 x da 1 "x
-h()hyy(y=-
o o- o- o- xo
10
Then by Lemma 2 of Marchev and Hobert [28I it follows" thatI fo =1o~)$<0 for~ ~ ~ ~ ~ ~ ~ ~~~~'\ al o x,-- .> n n>1snc ,, a be viewed as the posterior density of the scale parameter, o-, when the standard noninformative prior, 1/o-, is combined with the likelihood function of a iid observations,xl, x2., *,n, from the scale family -h( ). He~nc~e, it follows that mr(q) < 00. Consider the following univariate dlensity on RW_ e,(g) = lit) m(q) i= 1 Then one iteration of the Haar PX-DA algorithm, which is a modified version of the DA algorithm, consists of the following three steps: (i) Draw q ~ xr(q|S', E', y) (ii) Draw g ~ e,(g) and set q' = gq (iii) Draw p, E ~ xr(P, E1q', y) The Markov transition density of the PX-DA algorithm can be written as where R(q, dq') is the Markov transition density induced by the step 2, that takes q q' = gq. In the special case when we have multivariate Student's t data, it is easy to see that the density e,(g) is Gamma (n f-). So, in this particular case, the step 2 of the Haar PX-DA algorithm is simply a draw from a Gamma distribution. Our results in the next section are specific to multivariate Student's t regression. 4.4 Geometric Ergodicity of the Algorithms In this section we prove the geometric ergodicity of the DA and Haar PX-DA algorithms. In C'!s Ilter 2, we mentioned that the geometric ergodicity of a Markov chain can be proved by establishing a drift condition and an associated minorization condition. In this section, we show that these conditions can be established for the DA algforithm and the Haar PX-DA algforithm. We now prove used to establish the drift condition. Lemma 1. Sup~pose that P is a p.d. matrix: and that P - vector x. Then, the following lemma that will be XXT is a p.s.d. matrix: for some X"P- x < 1. Proof. Consider the matrix Calculating the determinant of the above matrix twice using the Schur complement of P and 1, we get the following identity |P|(1 x P- X) = |P xx |, i.e., |P XXT | x P- X = 1- |P| Since P is a p.d. matrix, |P| > 0. Similarly, since P-XXT is a p.s.d. matrix, |P--xxT| > 0. Then from the above identity it follows that X"P-1X < 1. O The following lemma establishes the drift inequality for the DA algorithm. Lemma 2. Let V(P, E) = CE (yi PTXi)TE-l(yi PTX,). Then for the DA dy..~ rithm we have n+d-k n+d-k E(V(P, E)| I', E') < V(P', E') + v.. v+d-2 v+d-2 Proof. Recall that the Markov transition density of the DA algorithm is JRY ~(p, clp', C') So we have E [E {V~(P, E)|q4,y}|O'I, E', y] G(V(P, C)IP', C') (4-12) To calculate the above conditional expectations we need the corresponding conditional distributions. The required conditional distributions, as derived in the previous sections, are the following PT | q, y"1dr(-T ~, Nd E~q y~ Ie "Q-y p and ind (V+dv+(Ti yi T 1(P~ T i q4|#, E, y ~0 =12.. 2 2 Starting with the innermost expectation, we have E(V(P, E)|E,q, y) i= 1 i=1i i=: 1i i=- 1~ 1 i= 1 To/ cacuat th bv xetton eueteflo in prpryfmtixnra distribution.i Wk o -1pul T), Wh aoere by V~ W,(,9, we meahefllwng thapety V f h as x dmnsonal noncentral Wishart distribution with m degrees of freedom, covariance matrix W and with noncentrality parameter 6 [2, chap 17]. In this case, E(V) = mW + 5. If 6 = 0, we wi that V has a central Wishart distribution and we write V ~ W,(m, 9). So, we have E [E {E(V(p, E)|E, q, y)|q, y) |', E', y] . E [ E- |E, q, y] dR + pEI-1p-1 and hence, E(V(P, E)|E, q, ) =yTC yi i= 1 i= 1 Ij-~ p x4+ x[da + pE- p ] x4 i= 1 i= 1 i= 1 p 1Xi) E 1(y To calculate the second level of expectation in (4-12), we use the fact that if X~ IW,(m, 8), then X-l ~ W,(m, 8) and E(X-1) mO [29, p. 85]. Thus, E [E(V(P, E)|E,q, y)|Iq, y i= 1 p- xi)+di X 2Xi i= 1 p x4)T E(E-ly |~) (y n l xSi) + dC xf axs. i= 1 (4-13) p 1x ) (y Q- y p'O- p) (y Now, yTQ ly-p p ~T yTQ y p p S-1 2-T2pl p1 q ysy + p (X Q- X)p 2p 1X Q ly n i= 1 n i= 1 a~ ( q ys- p x )(ys i= 1 p 1X ) . (yi p 1X ) (y Qly pl p) ( p x j=1 (Yi I-1TXi) ( j= 1 4i 4i (ys p x ). T p xy(yy- pxj) Since we assume that n '> d + k, from Section 4.2, we know that yTQ-ly p'O2- p is poitve efiit matrix,,c:, withcl proablity:l c 1. So pn (y~ p xj)(yj p'xj) = yi Q-' y)-p 0- p11 1 is a p~d. matrix w-ith probability 1. Also, it is straigh~tforward to see is a positive semi definite matrix. So, an application of Lemma 1 yields (yi p Txi) (y Q-1 y p 0- )- (yi p x ) I -. (4-14) Since we assume that r(X) = k, it implies that X Q- X is a p.d. matrix and obviously xy/ x.:IT is a p~s~d matr~ix. So ano0ther7 application? of ~Lemma. 1 give~s xfaxe8 = x (X QolX- X) x j=1 < (4-15) Therefore, we get E(V( n, E)|q, y) < (n + d k) 1 i= 1 Now, recall that ind 1 V + d v + (Txi yi)TE-1(PT4 x -i ys) , q4|#, E, y ~ 0 =12.. 2 2 So, using the fact that if w: ~ Gamma(a, b) then E( ) = finally, we havei E(V(P, E)|#', E') 5(s-# i E)-(s-# i i= 1 which proves the lemma. O The following lemma establishes an associated minorization condition. v+ (p' x, yi) (E')- ( Xi -- yi)\ v + (P' xi yi) (E')- ( X, yi) Lemma 3. Fix: 1 > 0 and let S {((, E) : V(P, E) < 1}. Then the M~arkov transition 1, .:1;i of the DA rlly.>rithm k(P, E|4', E') .el.:. the following minorization condition k(P, E|4', E') > ed(P, E) V(P', E') ES where the 7. ;. -.1 / d (P, E) is given by at~ So gtd9(i ) d r and e = ( fo g(t)dt)". The function g(-) is given by i/t (v+d v vd v+1 9() F 2 2' (,* whAere q* = iSlogl 1+ andr I(a~blx) denotes the Gamlrmajab) I r.' .1/ evaluated at the point x. Proof. For i = 1, 2,. ., n, define p'x,) (E)- (ys PTXe) < 1}~ Si = {((, E) : (ys Clearly, S C Si for i 1, 2,. ., n. Recall that, xr(q|0, E, y) is product of a Gamma densities. So, Yi ) (E')- ( X, mnf r(q|S',E', y) (P',C')ES Then by Hobert's [2001] Lemma 1, it straightforwardly follows that (q|O', E', y) > gii)ca Vci', E') t S. i= 1 V + (P' Xi .P,')~= v + d i= 1 i=1 (P',C')ES 2 ' Hence, the proof follows because Fr-om results stated in chapter 2 it then follows that the DA algorithm is geometrically ergodic as long as Ithe coefficient of V(B', E') in Lemmnra 2, i.e., "- is strictlly less than 1. We state this in the following theorem. Theorem 4. The DA ,Ily.>rithmn is ~i..- tre. I :. .rli ergodic if 0 < < 1 i.ec., a2 < v+k-2. Hobert and Marchev's [2008] Proposition 6 shows that the Haar PX-DA algorithm is at least as efficient (in efficiency ordering) as the DA algorithm. Using similar arguments as in Corollary 1, we can show that geometric ergodicity of the DA algorithm implies that of the Haar PX-DA algorithm. Hence we have the following corollary. Corollary 2. The Haar PX-DA rlly. rithm is at least as efficient as the DA rlly.>rithm. Also, the Haar PX-DA rlly.>rithm is geomen,... ril;i ergodic if a < v + k 2. Remark 4. Our result in Lemma 2 matches with M~archev and Hobert's [2004]l Theorem 1' in the case when k = 1. We also can prove the geometric ergodicity of the Haar PX-DA algorithm by directly establishing a drift and minorization condition for it. We actually can use the same drift function V(P, E) = CE (yi PTXi)TE-l(yi PTX,) to establish a drift condition for the Haar PX-DA algorithm. Lemma 4. Let V(P, E) = CE =(yi PTXi)TE-l(yi PTX,). Then for the Haar PX-DA rlhi ~.:athm we have (n +d k)(v +d)(n 1) av(n +d k) n(n 1)(n + d k)v(v + d) E ( V (, E) | ', E')I (nv 2)(v +d 2) nv 2 (nv 2)(v + d 2) Proof. Recall that the Markov transition density of the Haar PX-DA algorithm is - yi) E-1(P X, - So we have V(#, E)k(p, E|4', E')dpdE JW JR E [E { V(#, E)|q, y) | ', E', y] E(V(P, E)|# ', E') From Section 4.3.2 we know that q' gq where giq ~ Gamma (n ~-). We substitute q by q' in (4-13) and then straightforward algebra shows that (vi p xi) + xOx i= 1 n k) 9 p 1Xi) (y Qly pl p) (y E(V(P, E)|q1, y) E(V(P, E)|q, y) = (n k) (ys l i = 1 i i= 1 p'1Xi) (YTQ- y p'1Xi) (YTQ- y p'O- p)- (y p'O- p)- (y p- xs) + di x Ox i= 1 From (4-14) and (4-15) it then follows that v~n d -k) "1 nu 2 q4 i= 1 E(V(P, E)|q, y) v(n +d k) g av 2 q::4j~ i= 1 j i Since conditional on (P, E, y), qi's are independently distributed with in s v +d V + (PTxi q4|#, E, y ~ 1,2,...,n, we have E(V(P, E)|4', E') v(nn +- d k) n+ gy j~~--> i= 1 j i v(n + d k) [ v + -vd V + (P'TXi - av-2 + 2 i=1si v~ V+ (Pl'z Yi )T(E')-1(P'TXs -yj)TE'- (P'Txy -yj>Y) E q, y yi>)2 v(n + d k) I (n 1) (v + d) (' )CI-PTX nu 2 v(V+ d 2) - which proves the lemma. O Our minorization condition for the DA algorithm straightforwardly generalizes to a minorization condition for the PX-DA algorithm. Lemma 5. The M~arkov transition I7.* .:1/ of the Haar PX-DA rlly..,7:thm k(P, E|4', ') al. J. the following minorization condition where the 7. ,: -.1 / d (P, E) is given by and e, S and g () are as .I~ .,: .1 in lemma S. Together, Lemmas 4 and 5 prove the following theorem. Theorem 5.'lt The Har PX-DA rly. rithmr is geomenl~:...rl c/ ergodi if 0 < (n~'d-k vd n1 As a corollary of Theorem 4 (see Corollary 2) we know that the PX-DA algorithm is geomnetric~ally erg~odic if < 1. At first it might appear that Theoremr 5 is a better result than Corollary 2. But, we now show that it can never happen that both of the followingf inequalities hold together n + d-k (n d -k)(v d)(n -1) > 1 < 1. (4-16) v+ d- (nv -2)(v d -2) N\ote that (n d-)(v'd)(n-) n '+d-k if and only if (" ")( < 1 i.e. nr < 1 + "-2. So, if (nv- 2)(v+d- 2) v+d-2 (nv-2) d (4-16) holds then the following must holds n+d-k v-2 > 1; ,a< 1 (4-17) v+d-2 d which, in particular, we~ that v has to be '?N r;i than 2. Now (4-17) holds if and only if v-2 n~v+k-2andn<1+ But, clearly, the above two inequalities can't hold together. We, of course, would like to be able to wi that the DA algorithm ( or the Haar PX-DA algorithm) is geometrically ergodic for all (d, k, v, n)-tuples such that d > 1, k > 1, v > 0 and a > d + k. Note that the conditions in Theorem 4 and Theorem 5 are upshots of the particular drift function V(P, E) and the inequalities (4-14) and (4-15) that we used to prove the drift conditions. In our opinion, to make substantial improvement of Theorem 4 and Theorem 5, we either have to consider a different drift function or resort to a altogether different technique of proving geometric ergodicity other than establishing drift and minorization condition. Either of these two would require us to start from scratch. CHAPTER 5 SPECTRAL THEOREM AND ORDERING OF MARK(OV CHAINS This chapter is divided into two sections. In the first section we present a brief account of operator theory on Hilbert space. In particular, we define the spectral theorem for a bounded normal linear operator on Hilbert space. In the second section we show how these results from functional analysis are used in studying the theory of Markov chains. 5.1 Spectral Theory for Normal Operators We begin with the definition of a Hilbert space. Definition 4. A complex vector space H is called pair of vectors y and b in H is associatedd a complex number (g, h). called the inner product of g and b. .such that the followings hold: (g, h) =(h, g) (g + h, k) = (g, k) + (h, k) if h, k E H (a~g, h) = co(g, h) if 9, h EH and n~EC@ (h, h) > 0 for <<11 h EH and (h, h) = 0 only if h = 0. It can be .;-.le;, shown that the above inner product induces a norm on H I. I;,.. l by | |h | = (h, h) E If the r. tel. e:11 normed space is complete. it is called a Hilbert space. Let H he a Hilbert space over C and T : H H he a linear transformation, called a linear operator. The operator T is said to be bounded if there exists an At > 0 such that ||Th|| < Af||h|| for all h e H, where the norm, || ||, is as defined above. Let B(H) be the collection of all linear, bounded operators from H into H. For Te E (H), define the operator norm of T by i ~i||Th||ll htlhO It is easy to see that ||T|| = sup{||Th|| : h e H, ||h|| < 1} = sup{||Th|| : h e H, ||h|| = 1}. It can he shown that B(H) with the above norm is a Banach space (i.e., it is complete). If S, Te E (H), the composite operator ST E B(H) is defined by (ST)(h) = S(T(h)) h e H. In particular, powers of Te E (H) can he defined as To = I, the identity operator on H and T's = T,-1, for n = 1, 2, 3, .. We can easily verify the inequality || ST|| < || S|| || T|| Now, we define the adjoint of an operator. Let Te E (H). Then using Riesz representation theorem it can he shown that there exists a unique T* E B(H) for which (Ty, h) = (g, T*h) for all g, h e H. The operator T* is called the adjoint of T [47, p. 311]. Some properties of adjoint operators are listed below: ||T*|| = ||T|| T** =T ||T*T|| = ||T||2 and if S also belongs to B(H) then (ST)* = T*S*. We now can define different types of operators. An operator Te E (H) is said to be normal if TT* = T*T, self-adjoint if T* = T, unitary if T*T = I = TT*, projection if T2 =T Clearly, self-adjoint and unitary operators are normal. Note that, if T is self-adjoint then (Th, h) = (h, Th) for all h e H. But, from Definition 4 we know that (h, Th) = (Th, h). So, for self-adjoint operator T, (Th, h) is real for all h e H. We now state a useful uniqueness theorem [47, p.310]. Theorem 6. If Te E (H) and if (Th, h) = 0 for every h E H. then T is the zero-operator. Remark 5. The above theorem would fail if the .scatler field were RW instead of C. For exravple. consider the mardrixr T = < t s As a corollary of the above theorem we get an alternative definition for a normal operator. Corollary 3. An operator T on H is a normal operator if and only if ||T*h|| = ||Th|| for every h E H. Proof. Suppose that || T*h || = || Th || for every h E H. Then || T*h1 || 2 h12 and (T*h, T*h) = (Th, Th). This yields (TT*h, h) = (T*Th, h) i.e., ((TT* T*T)h, h) all b in H. By the above theorem we get TT* = T*T, i.e., T is normal. Conversely, if T is known to be normal then we can write ((TT* T*T)h, h) all b and now following the above reasoning in the reverse direction we get ||T*h|| for every h E H. so = for 0 for = |Th|| O Suppose Te E (H). The operator T is invertible if there is an S E B(H) such that TS = I = ST. The operator S is called the inverse of T and we write S = T l. Suppose R(T) denotes the range of T, i.e., R(T) = { Th : h e H}. Then T is invertible if and only if R(T) = H and T is one-to-one. The spectrum, o-(T), of the operator Te E (H) is defined as follows o-(T) = {A : T AI is not invertible}. Thus Ae o (T) if and only if at least one of the following two statements is true (i) The range of T AI is not all of H i.e., T AI is not onto. (ii) The operator T AI is not one-to-one. If (ii) holds, A is said to be an 1.:I ,: al;,.- of T and in this case there exists & / 0 such that Th = Ah. We call h an eigenvector corresponding to the eigenvalue A. It is difficult to make a general statement about the spectrum of an operator. We define compact operator later in this section. We will see that we can ;?, quite a lot about the spectrum of a compact operator. The complement of o-(T) is called the resolvent set of T and is denoted by p(T), i.e., p(T) = C \o-(T). The proof of the following theorem can be found in text books on functional analysis [see e.g. 9, p. 83]. Theorem 7. If ||T || < 1, then (I T) is an invertible operator. Furthermore, n= o Corollary 4. Let A be a complex; number such that ||T || < |A|. Then from the above theorem it follows that T AI is an invertible operator and its inverse is noi Corollary 5. Suppose that T is an invertible operator and S is another bounded linear operator such that ||T S|| < ||T-1|| ' then S is an invertible operator. Proof. We have || T- S I = IT-(T S)I < IT- IIT SI < 1 by our hypothesis. Thus, by above theorem, I (I T-1S) = T-1S is an invertible operator. Let us denote the operator T-1S by D. Then S = TD and so D- T-l is the inverse of S showing that S is an invertible operator. O Remark 6. Cor .ll.r,, a tells us that a(T) is a bounded set for r,.;; bounded linear operator T because a(T) C {A E C: |A| < ||T||}. Theorem 8. For r ;, bounded linear operator T the set a(T) is a closed, bounded subset of C. Proof. We already know that a(T) is a bounded set. We only need to show that a(T) is a closed set. We will show that p(T) is an open set. Let A be any point in p(T), i.e, (T AI)-l is a bounded linear operator. Choose p such that |p 1- A| < ||(T AI)- ||-l and let T' = T AI, T" = T pl. Now T' is invertible and From Corollary 2, it follows that T" is an invertible operator. Thus, p E p(T) and in particular it follows that i.e., any A s p(T) has a neighborhood contained in p(T). Hence, p(T) is an open set. O It can also be proved that for Te E (H), H / {0}, the spectrum of T, o-(T), is not empty [47, p.253]. The spectral radius, rT, of T is defined as rr, = sup{|A| : A E o-(T)}. So, rT is the radius of the smallest disc (with center at 0) containing the spectrum of T. From Remark 6, we see that rT < ||T||. We now state the spectral radius theorem. Theorem 9. (Sp~ectral Radius Theorem) If Te E(H). the spectral radius. rT. of T r, = lim || T'sl || in f ||T'j The operator norm of a normal operator is same as its spectral radius. We state this as the following theorem. Theorem 10. Let Te E (H) be a normal operator. Then rT = ||T||. If T is self-adjoint then a(T) is a non-empty compact subset of RW and is therefore contained in some smallest closed interval, [nz(T), Af(T)], of RW. Also, ni(T) and Af(T) can he expressed in terms of the inner product on the Hilbert space. These along with some other properties of self-adjoint operators are stated in the following theorem Theorem 11. Let Te E (H) be closed interwel containing a(T). Then (i) nz(T) = inf (Th, h) (ii) Af (T) = sup (Th, h) and (iii) ||T|| = .sup |(Th, h)| = mesxr(|m(T)|, |AF(T)|) We define an operator, T, to be positive if (Th, h) > 0 for all h eH and we write T > 0. An operator T is positive if and only if T is self adjoint and a(T) c [0, 00) [47, p. :330]. It is easy to see that both TT* and T*T are positive operators for any Te E (H). If S, Te E (H) are two self-adjoint operators, we ;?-, S > T if and only if S T > 0. If we consider the scalar field to be RW instead of C, there may exist a positive operator T that is not self-adjoint. The following theorem can he found in [47] p. :331 [also see :37]. Theorem 12. Every positive operator T E B(H) has a unique positive square root S .such that S2 = T (184 U)C Urite S = Ti. If T is invertible. .so i~s S. Now, we describe the spectral theorem for normal operators which defines the operator f(T) where f is a bounded Borel measurable function defined on the spectrum, o-(T), of a bounded normal operator T. A nice exposition on spectral theory is given in Devito [9]. He starts with deriving the spectral theorem for operators on finite dimensional vector space in chapter 5 and ends in chapter 7 with spectral theorem for unbounded self-adjoint operator. There are various approaches to derive the spectral theorem for a bounded, normal operator. In this chapter we define spectral theorem following Rudin [47]. First, we define resolution of the .:ll. ,1.:1; Definition 5. Let B(R) be 8 : B(n) i (H) with the following properties: (i) (4) = 0. the zero operator. 8 (R) = I (ii) each 8(w) is (iii) S( (n w') = S (w) S(w') (iv) If w n w' = 4 then S( (U e') = S (w) + S (w') (v) for every g, h E H. the .set function. Seas. I' It...lI by is a complex measure on B(R). Since each S(w) is a self-adjoint projection, we have Simz( o) = (S(Lo)h, h) = ||(W~h||2 for all h E H. So from (v) it follows that Simit is a positive measure on B(R) for each hE H. Theorem 13. (Sp~ectral Theorem) Let T ES (H) be a normal operator. Then there exists a unique resolution of the .:1. ,:./:7;, 8, on B(a(T)) such that (Tg, h) =m AS,,(dA) for all g,h h H L/et f be a bounded Borel function on a(T). The operator f(T) is I. I;,.. l to be the unique operator ;oril r;.,ii l The above theorem justifies the following notations and f (T) = f I(A)Sl~dA) We refer to the above 8 as the spectral decomposition of T and we denote it by Sr. We now give an example. We know that any linear operator, T, on a finite dimensional vector space, V, can be represented as a matrix [9, p. 157]. We define T to be a diagonalizable operator if its matrix representation is a diagonalizable matrix. From [9, p. 167] we know that if T is a normal operator on V, then T is diagfonalizable. In particular, if we take V to be RW", then T can be represented by n x n diagonalizable matrices. Assume that all the eigenvalues of T are distinct and let Ai, A2, *, abe the distinct eigenvalues of T with corresponding eigenvectors P1, P2,..., P,. Hence, a(T) = {Az, a2, *, n} and B(a(T)) is the powe~rset, of e(T(.). For Ae B t(c(T)), we define Sr(Al) = i:AgEA Piri'. If A = weV define Sr(#) = 0, the zero operator. Since the eigenvectors Pi's are mutually orthogonal [9, p. 157], it is easy to see that Er(-) satisfies all the properties stated in Definition 5. Also note that, i= 1 1\ore generally, if T is a normal operator on a finite dimensional Hilbert space then Sr(A) is the orthogonal projection onto union of all eigenspaces for {Ag}, where As'sS are eigenvalues of T that are contained in ,4. The operator f(T) in Theorem 13 is of course a bounded operator and it can he proved that || f(T)|| < sup{| f ()| : Ae E (T)}. We also have the following theorem which determines the spectrum of the operator f(T). Theorem 14. Let Te E (H) be a normal operator and let f : a(T) C be continuous. Then f (T)* = f(T) || f(T)|| = mp{| f (A)| : Ae E (T)}. As a corollary of the above theorem we get the following useful result. Corollary 6. Let Te E (H) be a normal operator. Then |I ||= ||T ||' for every positive integer n. Now we introduce an important class of linear operators called compact operators. Recall that a subset ,4 of a Banach space is compact if every sequence {a,z} in 4 has a subsequence converging to some element of A. Definition 6. We n;, Te E (H) is compact ifT maps the unit ball. {h eH : ||h|| < 1} into a .set in H whose closure is compact. Retherford [37] gives a nice, concise description of compact operators. For a compact operator T, the spectrum, a(T), is at most countable, and has at most one limit point, namely, 0. Also, any non-zero value in the spectrum is necessarily an eigenvalue of T. We state these results in the following theorem ([8] p. 214, [9] chapter 5 ). Theorem 15. Let Te E (H) be a non-zero, compact, self adjoint operator. Let {A,} be the distinct, non-zero .:I c. ,:;rle;, of T arr rtg J:I~~ in such a way that I11 > I12 > i jfOri )j. and let {h,} be the corresponding sequence of orthonormal eigenvectors. If the sequence does not terminate, then lim |A,| = 0. Moreover, for h E H, ntOO n= 1 L/et G, be the eigenspace of As, i.e., G, is the closed linear span of all the As 's in {hm} having As as ~. .9. :;...le;, Then each G, has finite dimension and its dimension is the number of times An is repeated. F.:,..elle; the spectrum of T, o-(T), is ,.;.-l;; {A,} together with {0}. Some useful properties of compact operators are listed below: T is compact if and only if T* is compact. If T is compact and S E B(H) then TS and ST are compact operators. T is compact if and only if T*T is compact. T is compact if and only if TT* is compact. Every linear operator on a finite dimensional vector space is a compact operator and its spectrum coincides with the set of eigfenvalues of the operator. Another example of a compact operator is the Hilbert-Schmidt integral operator ([47] p. 112, [8] p. 267 ). Let (R, B(R), p) be a measure space. Suppose, t : O x R C be such that |tI (x, y) |2iilly< L Then the associated Hilbert-Schmidt integral operator is the operator T : L2( @ L2(2; 0,) giVen by (T f )(x) = tx )f()d~) 5.2 Application of Spectral Theory to Markov Chains Recall from C'!s Ilter 2 that P(:r, dy) is a Markov transition function on (X, B(X)) with invariant probability, measure, x Lt L (x) be the vector space of real-valued, measurable, mean-zero functions on X that are square-integrable with respect to xr, i.e., Ifw defin an;, inne produc~,,t on~2, L, (x y (,g) = x f(:r)g(:r)iT(dX) then L2, (x i Hilbert space with its norm given by || f|| = ( f, f) The Markov transition function, P(:r, dy) defines an operator, P, on L (xr). The operator P takes each f eL (xr) to the function, P~l fa\ (eL (x)), deindasfllows, By Jensen's inequality it follows that ||P|| < 1. So P is a bounded linear operator on L (x). The vector space L2- (x) is a subspa Ce of2(x) and' L (x) is the- spac tha~t is orthogonal to constant functions. We later describe the reason why we consider P to be an operator only on L2, (x)intead of the whole,, spac 2(). We would also like to mention that, unlike the previous section, here we consider the scalar field to be RW because in statistics we are mostly interested in real valued functions. In this case, the inner product is symmetric in its arguments i.e., ( f, g) = (g, f). Fr-om C'!s Ilter 2, we know that the Markov chain A (and so the transition function P(:r, dy)) is reversible if and only if for all f, y E L2(;~ (P f, g) =( f, Pg). It is easy to show that the above definition of reversibility is unchanged if the space L2(,) is replaced with L (xr). So, the Markov chain A is reversible if and only if P is a self-adjoint operator on L (xr). The operator I P, known as the Laplacian operator corresponding to P, pl .i-~ an important role in analyzing Markov chains and we denote this operator by lp. Note that the operator 1p is not one-to-one if there exists f (f 0) E L (x) such that Ip f= 0. But, from ?1. i-n and Tweedie's [1993] Proposition 17.4.1 it follows that for Harris ergodic Markov chains the only functions in L2 K) SatiSfying the equation Ip f = 0 are xr-almost everywhere constant. Since L (xr) does not include functions which are xr-almost everywhere constant (non-zero) and since we consider Harris ergodic Markov chains, the operator Ip is one-to-one in our case. But, Ip might not be an onto operator and so it might not be invertible. From the definition of spectrum of an operator we know that lp is invertible iff 1 / a(P). Since a(P) is a closed set it follows that 1 tf a(P) if and only if supa(P) < 1. Hence the Laplacian operator, lp, is invertible iff supa(P) < 1. Recall from C'!s Ilter 2 that we ;?- a CLT holds for fm (or simply f) if there exists a ,2 E (0, 00) such that, as m 00o, ~(~f Ex f) iN~(0, a"2 It is proved in [13] that a CLT holds for any function f lying in R(lp), the range of lp, and the corresponding .I-i-uspind ic~ variance in the CLT is given by v(f, P) = ||g||2_- l 12, where f = 1pg g EL (jT) So, if Ip is invertible, then CLT holds for every function feL1, (x. ls, from Teore ?,, we know that if ||P|| < 1, then Ip is alrv-ove invertible. Hence, we have the following proposition. Proposition 2. Let P be the M~arkov transition function of a Harris ergodic M~arkov chain with invariant I1:. e. -1 / If ||P || < 1, then CLT holds for,,,, evey untion fe (x .2, Remark 7. Note that, for a Harris ergodic M~arkov chain on a finite state space, lp is rl;, ate, invertible. In this case a(P) contains only the .:II no.;rle;,. of P and 1 tf a(P) because 1 is not an .:II ,.:.;rle;,. of P since we consider P as an operator on Li(xr). So, for a Harris ergodic M~arkov chain on a finite state space, CLT holds for every function fe U\L (x). Since Ip is one-to-one, even in the case when Ip is not onto we still can define the inverse operator ly] if the domain of ly D(lp ), ;?--, is restricted to R(lp) [32]. Fr-om [13] it then follows that CLT holds for all f e D(lfl) and by lemma 3.2 of Mira and G. Or-l [32] we have v(f P) = (f [21p' I]f ), V fe D (lf ). In C'!s Ilter 2, we defined the efficiency ordering due to [32]. The problem with efficiency ordering is that even when we conclude P FE Q, it might happen that v(f, P) = v(f, Q) for every f E L2(;T) that are of interest. In this chapter, we discuss conditions that guarantee v( f D,,. ~ P) < c v( f Q)f r evey eL(x (5-1) for some constant 0 < c < 1 and in this case we ;?-- P is more efficient than Q with uniform relative efficiency c. Also, we ;?i- that P is strictly more efficient than Q if vfC \, P) < v,,f,, Q)fr vry feL () (5-2) Note that (5-1) implies (5-2). But, (5-2) does not necessarily imply (5-1). Let lp and 1Q be the Laplacian operators corresponding to the Markov operators P and Q. If we assume that both lp and 1o are invertible then it follows that D(lp )= L~ (xr) D(lf ). Hence, we have v ( f, P) = ( f, [21-1 I] f), V fe ( x)- and v (j f ) = ( f [21,l I]) fo ), V fL (xr). Proposition 3. Assume that both lp and lq are invertible. Then v( f, P) < c v( f, Q) for every f E L2 K7) ;T C -1_ --1 + I ~~> 0 Proof. The proof follows from following equivalences v(f, P) m (f ,[2) -I]f ) < c(f, [21l If ) V fe L2~ + (f( c21,l I]- 21,1 I) f )>O Vfe L2r + c [21, I]-21,1 I> 0 + 2c ly1 21, + (1 c)I > 0 -1(1 c) W c 9 p + I '> 0. The above proposition is similar to Corollary 5.1 of Mira and G. Or-l [32]. Remark 8. In Proposition 8, if we replace the inequalities by strict inequalities and c by 1, we get conditions for strict e~ff. 7.: ,: ; For the rest of this section we assume that we have reversible Markov chains. We know that the Markov transition operator, P, of a reversible Markov chain is self-adjoint. So, we can use spectral theory to analyze reversible Markov chains. Let v( f, P) =,, 1 AS,7f~f(dA), (5-3) where 8(-) is the spectral decomposition of P. K~ipnis and Varadhan [24] show that if v( f, P) in (5-3) is finite, then there is a CLT for f with the .mon i nd il~lc variance, v( f, P) given by (5-3). Now onwards, we denote 87,7(-) by Syp(-). Recall from Chapter 2 that P is said to be better than Q in the efficiency ordering written P FE Q, if U f, P) I U f, Q) for every f E L2(x). It can be easily shown that the definition of efficiency ordering is unchanged if the space L2 Kr) ;,,1S, repace wth L (x). It is tempting to conclude, from (53) and th~e spectral thleoremn that P >E Q Iiff > ~. But, siinc~e o-(P) c [-1, 1], the function h(x) = is not necessarily bounded on o-(P) (o-(P) C [-1, 1] because ||P|| < 1). Hence we might not be able to use the spectral theorem to define h(P). The following definition is from Mira and G. Or-l [32]. Definition 7. Suppose, P and Q are two M~arkov transition functions with invariant I4~ll.:l..1l...;i measure xr. Then P is said to be better than Q2 in the covariance ordering written P >> Q, if Q > P on L (r) . It is easy to see that any Markov operator, P, dominates I, the identity operator, in Peskun ordering (Peskun [35], Tierney [50]). Also, Tierney [50] shows that Peskun ordering implies covariance ordering. So, P >> I, i.e., lp = I P is a positive operator. From our discussion in the previous section it then follows that there exists a unique square root 1) of Ip. K~ipnis and Varadhan [24] show that v(f, P) < 00 if and only if f E R(lf). Note that, this does not contradict the result of Gordin and Lif~sic [13] since D/l_) C R~lf). Mira, and/,, Geyrl [32]prov that P Fi Q if and only if P FE Q. We give an alternative proof for P >> Q implies P FE Q. The following theorem is from Bendat and Sherman [3] [see also 32, Theorem 4.1]. Theorem 16. Let h(x) be a bounded Borel measurable function on (lI, I2). A necessary and sufficient condition for h to have the 1/* */'' l ;, that h(S) < h(T) for all bounded, self-adjoint operators S, T with S < T and o-(S),o-(T) c (II, I2) iS tati h iS t'il: O (II, I2) Gnd Can it,'.ibil.:. .ill/ be continued into the whole upper-half plane, and represents there an r,:...;let..- function whose ie.:lrl.:..rry part is non-negative. Bendat and Sherman [3] mention that the function ax + b h(x) = with ad bc > 0 cx +d satisfies the condition of Theorem 16 either in x > -d or x < -. We now prove the following theorem. Theorem 17. Suppose that P and Q are two reversible M~arkov transition functions. If P Fi Qc then P FE -c) Proof. For E (0, 1) define 1 + x Comparing with the function h(:r) = @ ,w ae db 2( )an >1 o for e E (0, 1) the function h,(:r) is analytic on a(P) and a(Q) and it satisfies the conditions of Theorem 16. Since, we are assuming that P 1 IQ i.e., Q '> P, by Theorem 16 we have h,(Q) > h,(P) for ee (0, 1). For Ee (0, 1), h,(:r) is also a bounded Borel function on [-1,1]. So we can use the spectral theorem to define the operator h,(P). By spectral theorem, we have (h,(P) f, f) =~~ he(A)Syp~A). Let a+(P) = a(P) 0 [0, 1] and a-(P) = a(P) 0 [-1, 0) then we can write Note that, d 2A For Ae e*a(P), h,(A) > 0 and he(A) ? as e 1 Hence, by 1\lonotone convergence theorem, we have h.(A)Syp(dh) T1 1 /l + (P) + (P) For Ae t (P), he,(A) i \as e 1 and 0 < he,(A) < 1. Since S, 18yp(dAX) < || f ||2< 00 by Dominated convergence theorem we get, /,~ /,"EF~l~~ 1+Al- So, SJ,j h,(A)Syp(dAX) .Jb Sy~A asFI~, e 1. For e e (0, 1), we know that h,(Q) > h,(P) i.e. (ht(Q) f, f) >(h,(P) f, f) Vf fL (xr) 1.e., Then taking the limit e 1 on both sides we get i, 1+A_ P 1+A i \e., v Gf, Q) > vfP) Vf E O\L (x. O Proposition 4. Assume, sup?(a(P)) < 1 and sup?(a(Q)) < 1. Then P >> Q iff P FE -. Proof. It is assumed that sup(a(P)) < 1 and sup(a(Q)) < 1. So the function h.(x) = M is bounded on a(P) and a(Q). Since a(P), e(Q) C (-2, 1) and h(x) is analytic in x < 1, by Theorem 16 we have I +Qd I +P 2> Po > I I-Qd I- P Since a(P) < 1, the function h(x) is continuous on a(P). By Theorem 14, we know that h(P) is a bounded operator. Since we consider the scalar field to be RW, by Theorem 14, wve also know that h(P) is self-adljoint. Let g(x) = Then g(x) is analytic when x > -1. Let max(sup a(P), sup a(Q)) = e'. Then both a(h(P)) and a(h(Q)) are subsets of [0, ( )]. So, we can apply Theorem 16 on g to show I +Qd I +P > 4Q~2> P. I -Q d I- P Hence, we get I+Q I+P S> Po > ~ r -Q r -P The function h(x) = M is bounded on a(P) and a(Q). So, we can use spectral theorem to define the operator h.(P) = Then, byv (5 3) we have I+Q I+P > -- aP FE - I-Q r -P Hence , I +Qd I +P P ~ ~ ~ I-G)~ c ~>~P G) I d I - Now we discuss how theory of compact operators can he used to analyze Markov chains. Let P he the Markov transition function of a Harris ergodic Markov chain. Fr-on Proposition 2 we know that if ||P|| < 1, then CLT holds for every square integrable function f. It is easy to see that if P is a compact operator then ||P|| < 1. So, in order to establish CLT for a Markov chain we can show that the corresponding Markov transition operator, P, is compact. In Section 5.1, we mentioned that any Hilbert Schmidt operator is compact. Recall front ChI Ilter 2 that if a reversible Markov chain is geometrically ergodic, then the CLT holds for every square integrable function f [41]. Also we mentioned before that a reversible Markov chain, P, is geometrically ergodic if and only if ||P|| < 1 [41, 44]. Schervish and Carlin [48] and Liu et al. [26]) have proved geometric ergodicity of certain Markov chains by establishing that the corresponding Markov operator, P, is Hilbert Schmidt. APPENDIX: CHEN AND SHAO'S CONDITIONS Here we state C'I. n! and Shao's [2000] necessary and sufficient conditions for city) < 00 as well as a simple method for checking these conditions. Let X denote the nx p matrix whose ith row is XT and let W denote an n x p matrix whose ith row is WT, where Xi if yi = 0 S-Xi if yi = 1. Proposition 5. [6] The function city) is finite if and only if (i) the design matrix: X has full column rank, and (ii) there exists a vector a = (al,..., a,) with strictly positive components such that W~a = 0 . Assuming that X has full column rank, the second condition of Proposition 5 can be straightforwardly checked with a simple linear program implementable in the R programming language [18] using the -p!'!.I::") function from the "boot" library. Let 1 and J denote a column vector and a matrix of 1s, respectively. The linear program calls for maximizing 1Ta subject to W~a = 0 (J I)a < 1 (element-wise) ai > 0 for i = 1,...,n This is ah-liws feasible (e.g., take a to be a vector of zeros). If the maximizer, call it a*, is such that a,* > 0, for all i = 1,..., n, then the second condition of Proposition 5 is satisfied and cz (y) < 00. Moreover, it is straightforward to show that if a* contains one or more zeros, then there does not exist an a with all positive elements such that W~a = 0, so c (y) = 00. REFERENCES [1] Albert, J. H. and Chib, S. (199:3), "B li-o Io analysis of binary and polychotomous response data," Journal of the American Statistical A~ssociation, 88, 669-679. [2] Arnold, S. F. (1981), The Theory of Litear M~odels and M~ultimerieste A,...el;;-.: Wiley, New York. [:3] Bendat, J. and Sherman, S. (1955), "Monotone and convex operator functions," Transactions of the American Afrathematical S8... .. I ;, 79, 58-71. [4] Casella, G. and George, E. (1992), "Explaining the Gihhs sampler," The American Statistician, 46, 167-174. [5] Chan, K(. S. and G. o c., C. J. (1994), "Discussion of il ..Inl~v chains for Exploring Posterior Distributions"," The Annedls of Shetistic~s, 22, 1747-1757. [6] Chen, M.-H. and Shao, Q.-M. (2000), "Propriety of posterior distribution for dichotomous quantal response models," Proceedings of the American Afrathemati- crel 8... .:. I;, 129, 29:3-302. [7] Chib, S. and Greenberg, E. (1995), "Understanding the Metropolis-Hastings algorithm," The American Shetistician, 49, :327-:335. [8] Conway, J. B. (1990), A C'ourse in Functional Air;,;-.- Springer-Verlag, New York, 2nd ed. [9] Devito, C. L. (1990), Functional A,:tale;,.: and Litear Op~erator The .-; Addison-Wesley Publishing Company. [10] Eaton, 31. L. (1989), Group? Invaricence Applications in Stratistic~s, Hayward, California and Alexandria, Virginia: Institute of Mathematical Statistics and the American Statistical Association. [11] Feller, W. (1968), An Introduction to P,~rol~.:l..7.;i Theory and its Applications, vol. I, New York: John Wiley & Sons, :$rd ed.
[12] Fernandez, C. and Steel, 31. F. J. (1999), \!ll1 variate Student-t regression models:
Pitfalls and inference," Biometrikes, 86, 15:3167.
[1:3] Gordin, 31. I. and Lifsic, B. A. (1978), "The central limit theorem for stationary
Markov processes," Soviet Iabthematic~s. D -11.; Jr;, 19, :392-394.
[14] Hohert, J. P. (2001), "Discussion of "The art of data augmentation" by D.A. van Dyk
and X.-L. Meng," Joureml of C'ompubstional and Grap~hical Stratistic~s, 10, 59-68.
[15] Hohert, J. P. and Geyer, C. J. (1998), "Geometric Ergodicity of Gihhs and Block
Gihhs Samplers for a Hierarchical Random Effects Model," Journal of Afultivaricate
A,...el;;-.: 67, 414-430.
[16] Hobert, J. P., Jones, G. L., Presnell, B., and Rosenthal, J. S. (2002), "On the
applicability of regenerative simulation in Markov chain Monte Carlo," Biometrika,
89, 731-743.
[17] Hobert, J. P. and Marchev, D. (2008), "A theoretical comparison of the data
augmentation, marginal augmentation and PX-DA algorithms," The Annals of
Statistics, 36, 532-554.
[18] Ihaka, R. and Gentleman, R. (1996), "R: A Language for Data Analysis and
Graphics," Journal of Comp~utational and Grap~hical Statistics, 5, 299-314.
[19] Johnson, N. L. and K~otz, S. (1970), Continuous Univariate Distributions-1, John
Wiley & Sons.
[20] Jones, G. L. (2004), "On the Markov chain central limit theorem," Pir..ort.7:/i;,
Surveys, 1, 299-320.
[21] Jones, G. L., Haran, M., Caffo, B. S., and Neath, R. (2006), "Fixed-width output
analysis for Markov chain Monte Carlo," Journal of the American Statistical Associa-
tion, 101, 1537-1547.
[22] Jones, G. L. and Hobert, J. P. (2001), "Honest exploration of intractable probability
distributions via Markov chain Monte Carlo," Statistical Science, 16, 312-34.
[23] -(2004), "Sufficient burn-in for Gibbs samplers for a hierarchical random effects
model," The Annals of Statistics, 32, 784-817.
[24] K~ipnis, C. and Varadhan, S. R. S. (1986), "Central limit theorem for additive
functionals of reversible Markov processes and applications to simple exclusions,"
Communications in M~athematical Ph ;; .~ ; 104, 1-19.
[25] Liu, J. S., Wong, W. H., and K~ong, A. (1994), "Covariance Structure of the Gibbs
Sampler with Applications to Comparisons of Estimators and Augmentation
Schemes," Biometrika, 81, 27-40.
[26] -(1995), "Covariance Structure and Convergence Rate of the Gibbs Sampler with
Various Scans," Journal of the Roriarl Statistical 8... .:. It; Series B, 57, 157-169.
[27] Liu, J. S. and Wu, Y. N. (1999), "Parameter Expansion for Data Augmentation,"
Journal of the American Statistical Association, 94, 1264-1274.
[28] Marchev, D. and Hobert, J. P. (2004), "Geometric ergodicity of van Dyk and Meng's
algorithm for the multivariate Student's t model," Journal of the American Statistical
Association, 99, 228-238.
[29] Mardia, K(., K~ent, J., and Bibby, J. (1979), M~ultivariate A,...el;;-.: London: Academic
press.
[:30] ?1n i-n, S. P. and Tweedie, R. L. (199:3), Iabrkov C'hain~s and Stochrastic St.,t..:1i;,
London: Springfer Verlagf.
[:31] -(1994), "Computable bounds for geometric convergence rates of Markov chains,"
The Annals of Applied P ~rol~.:l..7.;; 4, 981-1011.
[:32] Mira, A. and Geyer, C. J. (1999), "Ordering Monte Carlo Markov chains," Tech. Rep.
No. 6:32, School of Statistics, University of Minnesota.
[:33] Mykland, P., Tierney, L., and Yu, B. (1995), "Regeneration in Markov chain
Samplers," Journal of the American Statistical Asmsociation, 90, 2:3:341.
[:34] Nummelin, E. (1984) General Irreducible Iabrkov C'hain~s and Non-negative Op~era-
tors, London: Cambridge University Press.
[:35] Peskun, P. H. (197:3), "Optimum Monte Carlo sampling using Markov chains,"
Biometrikes, 60, 607-612.
[:36] R Development Core Team (2006), R: A Litr,:l;,rll. and Environment for Statistical
C'or,,ly-l.::l R Foundation for Statistical Computing, Vienna, Austria.
[:37] Retherford, J. R. (199:3) Hilbert Space: C'ompact Op~erators and the Trace theorem,
Cambridge University Press.
[:38] Robert, C. and Casella, G. (2004), M~onte Caerlo Statistical M~ethods, Springer, New
York.
[:39] Robert, C. P. (1995), "Simulation of truncated normal variables," Stratistic~s and
[40] Roberts, G. and Tweedie, R. (1999), "Bounds on regeneration times and convergence
rates for Markov chains," Stocheastic Processes and their Applications, 80, 221-229.
Corrigendum (2001) 91: 3:37-:338.
[41] Roberts, G. O. and Rosenthal, J. S. (1997), "Geometric ergodicity and hybrid Markov
chains," Electronic C'otmunications in Pr o~l~.:l..7.;; 2, 1:325.
[42] -(2001), :\1 I) 1:0v C'I !.1.4 and de-initializing processes," Scandinavian Journal of
Shetistic~s, 28, 489-504.
[4:3] -(2004), "General State Space Markov ('1. !.1.4 and MC' lC Algorithms," Pr o~l~.,t.7. /
Surveys, 1, 20-71.
[44] Roberts, G. O. and Tweedie, R. L. (2001), "Geometric L2 and L1 convergence are
equivalent for reversible Markov chains," Jourmal of Applied P l~r l~.:l..7.;; :38A, :37-41.
[45] Rosenthal, J. S. (1995), il!s...il. II!. .1, Conditions and Convergence Rates for Markov
C'I I!1, Monte Carlo," Journal of the American Statistical A~ssociation, 90, 558-566.
[46] Roy, V. and Hohert, J. P. (2007), "Convergence rates and .I-i nspinile; 1 standard errors
for MC1|LC algorithms for B li- -i Ils profit regression," Journal of the Roriarl Statistical
S .. .:. It; Series B, 69, 607-623.
[47] Rudin, W. (1991), Functional A,:tale;,.: McGraw-Hill, 2nd ed.
[48] Schervish, 31. J. and Carlin, B. P. (1992), "On the Convergence of Successive
Substitution CI .visplilrs Joureml of C'omputational and Grap~hical Stratistic~s, 1,
111-127.
[49] Tierney, L. (1994), 11 I) In~v chains for exploring posterior distributions (with
discussion)," The Annedls of Shetistic~s, 22, 1701-1762.
[50] -(1998), "A Note on Metropolis-Hastings K~ernels for General State Spaces," The
Annals of Applied Pr o~l~.:l..7.;; 8, 1-9.
[51] van Dyk, D. A. and Meng, X.-L. (2001), "The Art of Data Augmentation (with
Discussion) ," Jourmal of C'omputatiomel and Grap~hical Stratistic~s, 10, 1-50.
BIOGRAPHICAL SKETCH
Mr. Vivekananda Roy was born in 1980, in West Bengal, India. He spent his
childhood in his ancestral village, Bachhipur, where he went to school. After passing the
higher secondary examination, he moved to Calcutta in 1998. He received his bachelor's
degree in statistics from the University of Calcutta in 2001. He then joined the Indian
Statistical Institute, from where he received his Master of Statistics degree in 2003. He
joined the graduate program of the Department of Statistics at University of Florida in fall
2003. Upon graduation from UF, he will join the Department of Statistics at lowa State
University, as an Assistant Professor.
PAGE 1
1
PAGE 2
2
PAGE 3
3
PAGE 4
PAGE 5
page ACKNOWLEDGMENTS ................................. 4 LISTOFTABLES ..................................... 6 ABSTRACT ........................................ 7 CHAPTER 1INTRODUCTION .................................. 9 2MARKOVCHAINBACKGROUND ........................ 18 3BAYESIANPROBITREGRESSION ........................ 24 3.1Introduction ................................... 24 3.2GeometricConvergenceandCLTsfortheACAlgorithm .......... 27 3.3ComparingtheACandPX-DAAlgorithms ................. 35 3.4ConsistentEstimatorsofAsymptoticVariancesviaRegeneration ...... 37 4BAYESIANMULTIVARIATEREGRESSION 48 4.1Introduction ................................... 48 4.2ProofofPosteriorPropriety .......................... 50 4.3TheAlgorithms ................................. 58 4.3.1DataAugmentation ........................... 58 4.3.2HaarPX-DAAlgorithm ......................... 60 4.4GeometricErgodicityoftheAlgorithms .................... 61 5SPECTRALTHEOREMANDORDERINGOFMARKOVCHAINS ...... 71 5.1SpectralTheoryforNormalOperators .................... 71 5.2ApplicationofSpectralTheorytoMarkovChains .............. 81 APPENDIX:CHENANDSHAO'SCONDITIONS ................... 89 REFERENCES ....................................... 90 BIOGRAPHICALSKETCH ................................ 94 5
PAGE 6
Table page 3-1ResultsbasedonR=100regenerations ...................... 47 6
PAGE 7
7
PAGE 8
8
PAGE 9
PAGE 10
2exp 10
PAGE 11
2exp 2,theposteriordensitytakesthefollowingform(;jy)=1 2exp 2wherec2(y)isthemarginaldensityofygivenbyc2(y)=ZWZRdknYi=1Z10d 2exp 2dd;whereWRd(d+1) 2isthesetofddpositivedenitematrices.Inchapter4,weprovidenecessaryandsucientconditionsforc2(y)<1.Asinthepreviousexample,posteriorexpectationswithrespecttotheposteriordensity,(;jy),arenotavailableinclosed-form.Wenowdiscussdierentcomputationalmethodsthatcanbeusedtoapproximate( 1{1 ).Thesecomputationalmethodsarebroadlyoftwotypes,namely,numericalintegrationmethodsandsimulationbasedmethods.Ifthedimension,p,isnotlarge, 11
PAGE 12
PAGE 13
1{1 )bysimulatingaMarkovchainwithstationarydistribution.ThisisthebasicprincipleofMarkovchainMonteCarlo(MCMC)method.ThemostgeneralalgorithmforproducingMarkovchainswitharbitrarystationarydistributionistheMetropolis-Hastings(M-H)algorithm.AsimpleintroductiontotheM-HalgorithmisgiveninChibandGreenberg[ 7 ].Another 13
PAGE 14
4 ].Supposethep-dimensionalvectorin( 1{1 )canbewrittenas=(1;2;:::;p).ThesimplestGibbssampler(but,notthegeneralGibbssampler)requiresonetobeabletosimulatefromallunivariatefullconditionaldensitiesofi.e.,itisrequiredtosimulatefromtheconditionaldistributions,ijfj;j6=igfori=1;2;:::;p.ItisalsopossibletocreateahybridalgorithmwhichusesdierentversionsofM-HalgorithmtogetherwithGibbssamplertoconstructaMarkovchainwithstationarydistribution.Asourdiscussionsuggests,thereisaplethoraofMarkovchainswithstationarydistribution.InordertochoosebetweenMCMCalgorithms,weneedanorderingofMarkovchainshavingthesamestationarydistribution.InChapters2and5,wedescribedierentpartialorderingsofMarkovchains.LetfXjg1j=0denotetheMarkovchainassociatedwithanMCMCalgorithmthatisusedtoexplore.IffXjg1j=0isHarrisergodic(denedinChapter2),theergodictheoremimpliesthat,nomatterwhatthedistributionofthestartingvalue,X0, 14
PAGE 15
PAGE 16
AlbertandChib 's[ 1993 ]dataaugmentationalgorithmand LiuandWu 's[ 1999 ]PX-DAalgorithm.Westudytheconvergencerateofthesealgorithmsandprovetheexistenceofcentrallimittheorems(CLTs)forergodicaveragesunderasecondmomentcondition.WecomparethesetwoalgorithmsandshowthatthePX-DAalgorithmshouldalwaysbeusedsinceitismoreecientthantheotheralgorithminthesenseofhavingsmallerasymptoticvarianceinthecentrallimittheorem(CLT).Asimple,consistentestimatoroftheasymptoticvarianceintheCLTisconstructedusingregenerativesimulationmethods.InChapter4,weconsiderBayesianmultivariateregressionmodelswherethedistributionoftheerrorsisascalemixtureofnormals.Wenoticedbeforethatifthestandardnoninformativepriorisusedontheparameters(;),thenposteriorexpectationswithrespecttothecorrespondingposteriordensity,(;jy),arenotavailableinclosed-form.WedeveloptwoMCMCalgorithmsthatcanbeusedtoexplorethedensity(;jy).ThesealgorithmsarethedataaugmentationalgorithmandtheHaarPX-DAalgorithm.Wecomparethetwoalgorithmsandstudytheirconvergerates.Wealsoprovidenecessaryandsucientconditionsfortheproprietyoftheposteriordensity,(;jy).WhileinChapters3and4,weusedprobabilistictechniquestoanalyzedierentMCMCalgorithms,itispossibletotakeafunctionalanalyticapproachtostudyandcomparedierentMarkovchains.InChapter5,wegiveabriefoverviewofsomeresultsfromfunctionalanalysis.Inparticular,wediscussthespectraltheoremforbounded, 16
PAGE 17
17
PAGE 19
PAGE 20
PAGE 21
Rosenthal 's[ 1995 ]Theorem12showsthattheabovedriftandminorizationconditions,together,implythatisgeometricallyergodic.InChapter4,weemploydriftandminorizationconditionstoprovethegeometricergodicityofthedataaugmentationalgorithmusedinBayesianmultivariateStudent'stregressionproblem.Oneadvantageofprovinggeometricergodicityofbyestablishingtheabovedriftandminorizationconditionsisthatusing Rosenthal 's[ 1995 ]Theorem12,wealsocancalculateanupperboundofM(x)min( 2{1 ).Thisupperboundcanbeusedtocomputeanappropriateburn-inperiod(JonesandHobert[ 23 ],MarchevandHobert[ 28 ]).ThereareothermethodsofprovinggeometricergodicityofaMarkovchainthatdonotprovideanyquantitativeboundofM(x)min( 2{1 ).Wedescribeonesuchmethodnow.WewillassumethatXisequippedwithalocallycompact,separable,metrizabletopologywithB(X)astheBorel-eld.AfunctionV:X![0;1)issaidtobeunboundedocompactsetsifforevery>0,thelevelsetfx:V(x)giscompact.TheMarkovchainissaidtobeaFellerchainif,foranyopensetO2B(X),P(;O)isalower-semicontinuousfunction.Thefollowingpropositionisaspecialcaseof MeynandTweedie 's[ 1993 ]Lemma15.2.8.
PAGE 22
1 toestablishgeometricergodicityofMCMCalgorithmsusedinBayesianprobitregressionproblem.HobertandGeyer[ 15 ]employedProposition 1 toestablishthegeometricergodicityofGibbssamplersassociatedwithBayesianhierarchicalrandomeectsmodels.Noticethat,unlikeProposition 1 ,thedriftconditionin Rosenthal 's[ 1995 ]Theorem12doesnotrequirethedriftfunction,V,tobeunboundedocompactsets.Also, Rosenthal 's[ 1995 ]Theorem12doesnotneedtobeaFellerchain.ThedrivingforcebehindMCMCistheergodictheorem,whichissimplyaversionofthestronglawthatholdsforwell-behavedMarkovchains;e.g.,HarrisergodicMarkovchains.Indeed,supposethatf:X!RissuchthatRXjfjd<1anddeneEf=RXfd.Thentheergodictheoremsaysthattheaverage 41 ].(FormoreontheCLTinMCMC,seeChanandGeyer[ 5 ],MiraandGeyer[ 32 ],Jones[ 20 ]andJonesetal.[ 21 ].)ForathoroughdevelopmentofgeneralstatespaceMarkovchaintheory,seeNummelin[ 34 ]andMeynandTweedie[ 30 ].RobertsandRosenthal[ 43 ]providesaconcise,self-containeddescriptionongeneralstatespaceMarkovchains(alsoseeTierney[ 49 ]).AsmentionedinChapter1,foragivendistributionfunction,,therearelargenumberofMCMCalgorithmswithstationarydistribution.Onewaytoorderthese 22
PAGE 23
PAGE 24
6 ]providenecessaryandsucientconditionsonyandfxigni=1forc1(y)<1andtheseconditionsarestatedexplicitlyintheAppendix.Whentheseconditionshold,theposteriordensityofiswelldened(i.e.,proper)andisgivenby(jy)=1 AlbertandChib 's[ 1993 ]dataaugmentationalgorithm,whichwenowdescribe.LetXdenotethenpdesignmatrixwhoseithrowisxTiand,forz=(z1;:::;zn)T2Rn,let^=^(z)=(XTX)1XTz.Also,letTN(;2;w)denoteanormaldistributionwith 24
PAGE 25
(i) Drawz1;:::;znindependentlywithziTN(xTi;1;yi) (ii) Draw0Np^(z);(XTX)1AlbertandChib[ 1 ]hasbeenreferencedover350times,whichshowsthattheACalgorithmanditsvariantshavebeenwidelyappliedandstudied.ThePX-DAalgorithmofLiuandWu[ 27 ]isamodiedversionoftheACalgorithmthatalsosimulatesaMarkovchainwhoseinvariantdensityis(jy).AsingleiterationofthePX-DAalgorithmentailsthefollowingthreesteps: (i) Drawz1;:::;znindependentlywithziTN(xTi;1;yi) (ii) Drawg2Gamman 2Pni=1zixTi(XTX)1XTz2andsetz0=(gz1;:::;gzn)T Draw0Np^(z0);(XTX)1NotethattherstandthirdstepsofthePX-DAalgorithmarethesameasthetwostepsoftheACalgorithmso,nomatterwhatthedimensionof,thedierencebetweentheACandPX-DAalgorithmsisjustasingledrawfromtheunivariategammadistribution.Fortypicalvaluesofnandp,theeortrequiredtomakethisextraunivariatedrawisinsignicantrelativetothetotalamountofcomputationneededtoperformoneiterationoftheACalgorithm.Thus,thetwoalgorithmsarebasicallyequivalentfromacomputationalstandpoint.However,LiuandWu[ 27 ]andvanDykandMeng[ 51 ]bothprovideconsiderableempiricalevidencethatautocorrelationsdiedownmuchfasterunderPX-DAthanunderAC,whichsuggeststhatthePX-DAalgorithm\mixesfaster"thantheACalgorithm.(LiuandWu[ 27 ]alsoestablishedatheoreticalresultalongtheselines-seetheproofofourCorollary 1 .) 25
PAGE 26
3{1 )isbyestablishinggeometricergodicityoffjg1j=0.Inthischapter,weprovethattheMarkovchainsunderlyingtheACandPX-DAalgorithmsbothconvergeatageometricratewhichimpliesthattheCLTin( 3{1 )holdsforeveryf2L2(jy);thatis,foreveryfsuchthatRRpf2()(jy)d<1.WealsoestablishthatPX-DAistheoreticallymoreecientthanACinthesensethattheasymptoticvarianceintheCLTunderthePX-DAalgorithmisnolargerthanthatundertheACalgorithm.Regenerativemethodsareusedtoconstructasimple,consistentestimatoroftheasymptoticvarianceintheCLT.Asanillustration,weapplyourresultsto vanDykandMeng 's[ 2001 ]lupusdata.Inthisparticularexample,theestimatedasymptoticrelativeeciencyofthePX-DAalgorithmwithrespecttotheACalgorithmisabout65.Hence,eventhoughtheACandPX-DAalgorithmsareessentiallyequivalentintermsofcomputationalcomplexity,hugegainsineciencyarepossiblebyusingPX-DA. 26
PAGE 27
3.2 and 3.3 ,respectively.InSection 3.4 wederiveresultsthatallowfortheconsistentestimationofasymptoticvariancesviaregenerativesimulation.
PAGE 28
34 ,Theorem3.8].Supposehisabounded,harmonicfunction.SincetheACalgorithmis-irreducibleandhasaninvariantprobability 28
PAGE 29
34 ,Proposition3.13].Thus,thereexistsasetNwith(N)=0suchthath()=cforall2 2 itfollowsthatergodictheoremholdsforit.Thefollowingtheoremisthemainresultofthissection. Proof. 1 .WehaveshownthatACalgorithmis-irreducibleandaperiodic,wheredenotetheLebesguemeasureonRp.So,ifisamaximalirreducibilitymeasurefortheMarkovchainunderlyingtheACalgorithm,then.Conversely,if(A)=0,thenKm(;A)=0forall2Rpandallm2N,whichimpliesthat(A)=0anditfollowsthat.Hence,.Sincethesupportofobviouslyhasnon-emptyinterior,itfollowsthatthesupportofamaximalirreducibilitymeasurefortheACalgorithmhasnon-emptyinterior.WenowdemonstratethattheMarkovchainassociatedwiththeACalgorithmisaFellerchain.LetandOdenoteapointandanopensetinRp,respectively.Assumethatfmg1m=1;isa(deterministic)sequenceinRpwithm6=suchthatm!asm!1.TwoapplicationsofFatou'sLemmainconjunctionwiththefactthat(zj;y)iscontinuousinyieldliminfm!1K(m;O)ZOliminfm!1k(0jm)d0=ZOliminfm!1"ZRm(0jz;y)(zjm;y)dz#d0
PAGE 30
1 withdriftfunctionV()=(X)T(X).RecallthatXisassumedtohavefullcolumnrank,p,andhenceXTXispositivedenite.Thus,foreach>0,thesetf2Rp:V()g=f2Rp:T(XTX)giscompactsothefunctionVisunboundedocompactsets.Now,notethat(KV)()=ZRpV(0)k(0j)d0=ZRp"ZRnV(0)(0jz;y)d0#(zj;y)dz=ZRpEV(0)jz;y(zj;y)dz=EnEV(0)z;y;yo;where,asthenotationsuggests,theexpectationsinthelasttwolinesarewithrespecttotheconditionaldensities(0jz;y)and(zj;y).Recallthat(0jz;y)isap-dimensionalnormaldensityand(zj;y)isaproductoftruncatednormals.Evaluatingtheinsideexpectation,wehaveEV(0)z;y=E(0)TXTX0z;y=tr(XTX(XTX)1)+zTX(XTX)1(XTX)(XTX)1XTz=p+zTX(XTX)1XTzp+zTz;
PAGE 31
19 ]implythatifUTN(;1;1)then,E(U2)=1+2+() ();where()withonlyasingleargumentdenotesthestandardnormaldensityfunction;thatis,(v)isequivalentto(v;0;1).Similarly,ifUTN(;1;0)then,E(U2)=1+2() 1():ItfollowsthatEz2ij;y=8><>:1+(xTi)2+(xTi)(xTi) (xTi)ifyi=11+(xTi)2(xTi)(xTi) 1(xTi)ifyi=0:Amorecompactwayofexpressingthisisasfollows: Ez2ij;y=1+(wTi)2(wTi)(wTi) 1(wTi);(3{2)wherewiisdenedintheAppendix.Hence,wehave (KV)()=EnEV(0)z;y;yop+n+nXi=1(wTi)2nXi=1(wTi)(wTi) 1(wTi):(3{3)Recallthatthegoalistoshowthat(KV)()V()+Lforall2Rp.Itfollowsfrom( 3{3 )that(KV)(0)p+n.Wenowconcentrateon2Rpnf0g.WebeginbyconstructingapartitionofthesetRpnf0gusingthenhyperplanesdenedbywTi=0.Forapositiveintegerm,deneNm=f1;2;:::;mg.LetA1;A2;:::;A2ndenoteallthesubsetsofNn,and,foreachj2N2n,deneacorresponding 31
PAGE 32
[2nj=1Sj=Rpnf0g,and 5 areinforce,thereexiststrictlypositiveconstantsfaigni=1suchthata1wT1+a2wT2++anwTn=0:Therefore, 3{4 )impliesthattheremustalsoexistani06=isuchthatwTi0andwTihaveoppositesigns.Thus,AjandAjarebothnonempty.NowdeneC=j2N2n:Sj6=;.Foreachj2C,deneRj()=Pi2Aj(wTi)2 11 ,p.175].Also,itisclearthatif 32
PAGE 33
1(u);thenM2(0;1).Fixj2C.Itfollowsfrom( 3{3 )andtheresultsconcerningMillsratiothatforall2Sj,wehave(KV)()p+n+nXi=1(wTi)2Xi2Aj(wTi)(wTi) 1(wTi)Xi2 1(wTi)p+n+nXi=1(wTi)2+Xi2Aj(wTi)(wTi) 1(wTi)Xi2 1(wTi)p+n+nXi=1(wTi)2+nMXi2
PAGE 35
PAGE 36
10 ]forbackgroundonleftgroupactionsandmultipliers.)LetZdenotethesubsetofRninwhichzlives;i.e.,ZistheCartesianproductofnhalf-lines(R+andR),wheretheithcomponentisR+ifyi=1andRifyi=0.Fixz2Z.ItiseasytoseethatStep2ofthePX-DAalgorithmisequivalenttothetransitionz!gzwheregisdrawnfromadistributiononGhavingdensityfunction 17 ].First,theirProposition3showsthatR(z;dz0)isreversiblewithrespectto(zjy)anditfollowsthatk(0j)isreversiblewithrespectto(jy).WenowusethefactthattheACalgorithmisgeometricallyergodictoestablishthatthePX-DAalgorithmenjoysthispropertyaswell. Proof. 25 32 ].DenotethenormsoftheseoperatorsbykKkandkKk.Ingeneral,areversible,HarrisergodicMarkovchainisgeometricallyergodicifandonlyifthenormoftheassociatedMarkovoperator 36
PAGE 37
41 44 ].ByTheorem 1 ,theACalgorithmisgeometricallyergodicandconsequentlykKk<1.ButLiuandWu[ 27 ]showthatkKkkKk[seealso 17 ,Theorem4]andhencekKk<1,whichimpliesthatthePX-DAalgorithmisalsogeometricallyergodic. WehavenowshownthattheMarkovchainsunderlyingtheACandPX-DAalgorithmsarebothreversibleandgeometricallyergodicandhencebothhaveCLTsforallf2L2(jy).WenowuseanotherresultfromHobertandMarchev[ 17 ]toshowthatthePX-DAalgorithmisatleastasecientastheACalgorithm. Proof. HobertandMarchev 's[ 2008 ]Theorem4. Inordertoputourtheoreticalresultstouseinpracticetocomputevalidasymptoticstandarderrors,werequireaconsistentestimatoroftheasymptoticvarianceandthisisthesubjectofthenextsection. 3 .Ofcourse,themarginalchainfjg1j=0hastheMarkovtransitiondensityk(0j)nomatterwhichversionoftheMarkovtransitiondensitywechooseforthejointchain.Whilethisisobviousforthechaincorresponding~~k,itcanbeeasilyshownfor~kbyconsideringtwoconsecutivesteps 37
PAGE 38
42 ]canbeusedtoshowthatthejointchainfj;zjg1j=0inheritsgeometricergodicityfromitsmarginalchainfjg1j=0.Notethatfj;zjg1j=0isthechainthatisactuallysimulatedwhentheACalgorithmisrun(wejustignorethezjs).Supposewecanndafunctions:RpRn![0;1],whoseexpectationwithrespectto(;zjy)isstrictlypositive,andaprobabilitydensityd(0;z0)onRpRnsuchthatforall(0;z0);(;z)2RpRn,wehave ~k0;z0j;zs(;z)d(0;z0):(3{6)Thisiscalledaminorizationcondition[ 22 30 43 ]anditcanbeusedtointroduceregenerationsintotheMarkovchaindrivenby~k.Theseregenerationsarethekeytoconstructingasimple,consistentestimatorofthevarianceintheCLT.Afterexplainingexactlyhowthisisdone,wewillidentifysanddforbothACandPX-DA.Equation( 3{6 )allowsustorewrite~kasthefollowingtwo-componentmixturedensity ~k0;z0j;z=s(;z)d(0;z0)+1s(;z)r0;z0j;z;(3{7) 38
PAGE 39
1s(;z);whens(;z)<1(anddenedarbitrarilywhens(;z)=1).InsteadofsimulatingtheMarkovchainfj;zjg1j=0intheusualwaythatalternatesbetweendrawsfrom(zj;y)and(jz;y),wecouldsimulatethechainusingthemixturerepresentation( 3{7 )asfollows.Supposethecurrentstateis(j;zj)=(;z).First,wedrawjBernoullis(;z).Thenifj=1,wedraw(j+1;zj+1)fromd,andifj=0,wedraw(j+1;zj+1)fromtheresidualdensity.The(random)timesatwhichj=1correspondtoregenerationsinthesensethattheprocessprobabilisticallyrestartsitselfatthenextiteration.Morespecically,supposewestartbydrawing(0;z0)d.Theneverytimej=1,wehave(j+1;zj+1)dsotheprocessis,ineect,startingoveragain.Furthermore,the\tours"takenbythechaininbetweentheseembeddedregenerationtimesareiid,whichmeansthatstandardiidtheorycanbeusedtoanalyzetheasymptoticbehaviorofergodicaverages,therebycircumventingthedicultiesassociatedwithanalyzingaveragesofdependentrandomvariables.Formoredetailsandsimpleexamples,seeMyklandetal.[ 33 ]andHobertetal.[ 16 ].Inpractice,wecanevenavoidhavingtodrawfromr(whichcanbeproblematic)simplybydoingthingsinaslightlydierentorder.Indeed,giventhecurrentstate(j;zj)=(;z),wedraw(j+1;zj+1)intheusualway(thatis,bydrawingfrom(zj;y)and(jz;y))afterwhichwe\llin"avalueforjbydrawingfromtheconditionaldistributionofjgiven(j;zj)and(j+1;zj+1),whichisjustaBernoullidistributionwithsuccessprobabilitygivenby ~k(j+1;zj+1jj;zj):(3{8) 39
PAGE 40
N=1 16 ]areapplicableandimplythat,aslongasthereexistsan>0suchthatEjf()j2+jy<1,then N2: 3{9 )isslightlydierentfromtheCLTdiscussedearlier,whichtakestheformp
PAGE 41
PAGE 42
(2)p 2exp1 2^(z)TXTX^(z):where^(z)=(XTX)1XTzThus,s(z)="inf2D(jz;y) 2^(z)TXTX^(z)2TXTX^(z) 2^(z)TXTX^(z)2TXTX^(z)="exp1 2zTX(XTX)1XTz 2zTX(XTX)1XTzinf2Dexp(zz)TX="exp1 2zTX(XTX)1XTz 2zTX(XTX)1XTzexppXi=1citiIR+(ti)+ditiIR(ti)
PAGE 43
PAGE 44
PAGE 45
~k(j+1;(zj+1;gj+1)jj;(zj;gj))=inf2D(jgjzj;y) 2g2jzTjX(XTX)1XTzj 2zTX(XTX)1XTzexppXi=1cit(j)iIR+(t(j)i)+dit(j)iIR(t(j)i)exp1 2zTX(XTX)1XTz 2g2jzTjX(XTX)1XTzjexp(gjzjz)TXj+1ID(j+1)=exp(pXi=1cit(j)iIR+(t(j)i)+dit(j)iIR(t(j)i)t(j)ij+1;i)ID(j+1)wheret(j)T=(gjzjz)TX.Theorem 2 statesthattheasymptoticvarianceintheCLTforthePX-DAalgorithmisnolargerthanthatfortheACalgorithm;i.e,0<2f;k2f;k<1forallf2L2(jy).However,weknowfromRemark 1 thattheregenerativemethodisbasedonaslightlydierentCLTwhoseasymptoticvariancehasanextrafactorinvolvingthesmallfunction,namelyE(s()jy),fromtheminorizationcondition.AlthoughthesmallfunctionsinthetwominorizationconditionsthatwederivedfortheACandPX-DAalgorithmsareslightlydierent,E(s(:)jy)remainsexactlythesameasshown 45
PAGE 46
2(n2)=2(n=2)ZRnZR+z0T(IH)z0n vanDykandMeng 's[ 2001 ]lupusdata,whichconsistsoftriples(yi;xi1;xi2),i=1;:::;55,wherexi1andxi2arecovariatesindicatingthelevelsofcertainantibodiesintheithindividualandyiisanindicatorforlatentmembranouslupusnephritis(1forpresenceand0forabsence).vanDykandMeng[ 51 ]consideredthemodelPr(Yi=1)=0+1xi1+2xi2;withaatprioron.Weusedalinearprogram(thatisdescribedintheAppendix)toverifythat ChenandShao 's[ 2000 ]necessaryandsucientconditionsforproprietyaresatisedinthiscase. 46
PAGE 47
39 ]togenerateone-sidedtruncatednormalrandomvariables.WeranACandPX-DAforR=100regenerationseach.Thistook1,223,576iterationsforACand1,256,677iterationsforPX-DA.WeusedthesimulationstoestimatetheposteriorexpectationsoftheregressionparametersandtheresultsareshowninTable 3-1 .(ResultsinChenandShao[ 6 ]implythatthereexists>0suchthatEjjj2+jy<1forj2f0;1;2g.)ItisstrikingthattheestimatedasymptoticvariancesfortheACalgorithmareallatleast65timesaslargeasthecorrespondingvaluesforthePX-DAalgorithm.Theseestimatessuggestthat,inthisparticularexample,theACalgorithmrequiresabout65timesasmanyiterationsasthePX-DAalgorithmtoachievethesamelevelofprecision.(Weactuallyrepeatedtheentireexperimentseventimesandtheestimatesof2f;k=2f;krangedbetween40and145.) Table3-1. ResultsbasedonR=100regenerations ACAlgorithmPX-DAAlgorithm Parameter estimates.e.^f;k=p -3.0180.012 66.61 6.9160.023 66.92 3.9820.015 63.1 47
PAGE 48
2exp 4{1 )asy=X+"wherey=(y1;:::;yn)Tisthendmatrixofobservations,X=(x1;x2;:::;xn)Tisthenkmatrixofcovariatesand"=("1;:::;"n)Tisthendmatrixoferrorvariables.Thelikelihoodfunctionfortheregressionmodelin( 4{1 )isgivenbyf(yj;)=nYi=1Z10d 2exp
PAGE 49
2.Theposteriordensitytakesthefollowingform 2isthesetofddpositivedenitematrices.FernandezandSteel[ 12 ]provedthatc2(y)<1ifandonlyifnd+k.Insection 4.2 ,wegiveanalternativeproofoftheposteriorpropriety.Abyproductofourproofisamethodofexactsamplingfrom(;jy)intheparticularcasewhennisexactlyd+k.Throughoutthischapterweassumethatnd+k.Wealsoassumethatthecovariatematrix,X,isoffullcolumnranki.e.,r(X)=k.Theposteriordensityin( 4{3 )isintractableinthesensethatposteriorexpectationsarenotavailableinclosed-form.Also,ourexperienceshowsthatitisdiculttodevelopausefulprocedureformakingi.i.d.drawsfrom(;jy).InthischapterwefocusonMCMCmethodsforexploringtheposteriordensityin( 4{3 ).Wedevelopadataaugmentation(DA)algorithmfor(;jy)insection 4.3.1 .IthasbeennoticedintheliteraturethatthestandardDAalgorithmoftensuersfromslowconvergence[ 51 ].EmpiricalandtheoreticalstudieshaveshownthatalternativealgorithmsthataremodiedversionsofthestandardDAalgorithmsuchastheHaarPX-DAalgorithmandthemarginalaugmentationalgorithmoftenprovidehugeimprovementoverthestandardDAalgorithm(LiuandWu[ 27 ],vanDykandMeng[ 51 ],RoyandHobert[ 46 ],HobertandMarchev[ 17 ]).Insection 4.3.2 ,wedeveloptheHaarPX-DAalgorithmfortheposteriordensityin( 4{3 ).Wethenspecializetothecasewhentheerrors,"i's,haveaStudent'stdistributioni.e.,themixingdistribution,H(),in( 4{2 )isaGamma( 49
PAGE 50
17 ]wealsoconcludethattheHaarPX-DAalgorithmisatleastasecientasthedataaugmentationalgorithminthesensethattheasymptoticvariancesinthecentraltheoremunderHaarPX-DAalgorithmareneverlargerthanthoseundertheDAalgorithm.SomeoftheseresultsaregeneralizationsofresultsfromvanDykandMeng[ 51 ]andMarchevandHobert[ 28 ]whoconsideredthespecialcasewheretherearenocovariatesintheregressionmodel( 4{1 ),i.e.,X=(1;1;:::;1)TandHisGamma( 4.3 ,wedescribetheDAandtheHaarPX-DAalgorithms.Inthelastsection,wecomparethetwoalgorithmsandprovethatboththealgorithmsconvergeatageometricrate. 50
PAGE 51
4{1 )and( 4{2 )itfollowsthat (2)nd 2:
PAGE 52
2 ,Chapter17] 2 ]Supposeisanmrmatrix,andaremmandrrnon-negativedenitematrices.WesaythatZhasamatrixnormaldistributionwithparameters,andifZisanmrrandommatrixhavingmomentgeneratingfunctiongivenbyMZ(t)=exptr(Tt)+1 2tr(tTt)andwewriteZNm;r(;;).InthiscasewehaveE(Z)=.Moreover,ifandarepositivedenitematricesthenZhasthefollowingdensityfunctionfZ(z)=1 (2)mr=2jjr=2jjm=2exp1 2tr1(z)1(z)T:Sincer(X)=k,itfollowsthatXTQ1Xandhenceisap.d.matrix.Thus, 2tr1yTQ1yT1)jQjd 2
PAGE 53
2 2exp(1 2tr1yTQ1yT1)jjd 2IQ1 2X(XTQ1X)1XTQ1 2Q1 2y andsinceIQ1 2X(XTQ1X)1XTQ1 2isanidempotentmatrix,itimpliesthatyTQ1yT1isapositivesemi-denitematrix.NowweprovethatyTQ1yT1isap.d.matrixbyshowingthatjyTQ1yT1j6=0(withprobabilityone).Letbethen(d+k)augmentedmatrix(X:y).Then,TQ1=264XTyT375Q1Xy=264XTQ1XXTQ1yyTQ1XyTQ1y375:Therefore, Sincer(X)=k,weknowthatXTQ1Xisap.d.matrixandhencejj>0.Alsosincend+k,thed+kcolumnsofarelinearlyindependentwithprobabilityonebecausetheprobabilityofndimensionalrandomcolumnvectorsofylyinginanylinearsubspaceofRnwithdimensionn1iszero(withrespecttotheLebesguemeasureonRdn). 53
PAGE 54
4{8 )itfollowsthatjyTQ1yT1j>0.Tointegratetheexpressionin( 4{6 )withrespectto,weusethefollowingdenitionofInverseWishartdistribution. 29 ,p.85]Letbeappp.d.matrix.ThenforsomempthepprandommatrixWissaidtohaveaInverseWishartdistributionwithparametersmandifthep.d.fofW(withrespecttoLebesguemeasureonRp(p+1) 2,restrictedtothesetwhereW>0)isgivenbyf(W;m;)=jWjm+p+1 2exptr(1W1) 2 4jjm 2(m+1i))andwewriteWIWp(m;).Henceifnd+ki.e.,nkdbytheabovedenitionofInverseWishartdistribution,wehave =2d(nk) 2d(d1) 4Qdi=1(1 2(nk+1i)) (2)d(nk) 2yTQ1yT1nk 2(nk+1i)) 4yTQ1yT1nk 4{8 )weget 4jTQ1jd 4jjdnYj=1h(qj): 54
PAGE 55
4jjd;which,ofcourse,isanitenumber.So,wehaveprovedthatintheparticularcasewhenn=d+k,theposteriordistributionisproperwithprobabilityone.Then,anapplicationofLemma2ofMarchevandHobert[ 28 ]showsthatfor-almostallytheposteriorisproperfornd+k.Wewillnowshowthattheposteriordistributionisimproperwhenn
PAGE 56
2:uii>0;8i=1;:::;dg:NoticethatthecolumnsofLformabasisofRd.Sothereexistsconstantsb1;b2;:::;bdwithbi6=0forsomeisuchthatx0=Pdi=1bili.Supposei0=minfi2f1;2;:::;dg:bi6=0g.NowconsiderthetransformationL=(l1;l2;:::;ld)!O=(o1;o2;:::;od)whereoi=li8i6=i0andoi0=x0i.e.,O=LA,[email protected]:ThentheJacobianofthetransformationisjjAjdj=jbi0jd[ 29 ,p.36].Notethatli0=1 2dXi=1lTilidYi=1lnkiiidL=jbi0jd 2Xi6=i0oTioi+1 2Xii01+b2i 2:vii>0;8i6=i0andbi0vi0i0>0g:ByFubini'stheoremwecanrearrangetheorderofintegration.Noticethatoi0doesnotappearintheexponentialtermandtheonlyterminvolvingoi0i0intheaboveintegralis 56
PAGE 57
AswementionedintheintroductionthatFernandezandSteel[ 12 ]gaveaproofofproprietyoftheposteriordensity(;jy).Abyproductofouralternativeproofisamethodofexactsamplingfrom(;jy)intheparticularcasewhennisd+k.Wedescribethemethodnow.Let(q;;jy)bethejointposteriordensityof(q;;)givenby(q;;jy)=f(y;qj;)(;) 4{5 )itiseasytoseethat 4{10 )weknowthat(qjy)=Qni=1h(qi).So,inthiscaseanexactdrawfrom(;jy)canbemadeusingthefollowingthreesteps: (i) Drawq1;q2;:::qnindependentlywhereqih(qi). (ii) DrawIWd[nk;(yTQ1yyTQ1X(XTQ1X)1XTQ1y)1] (iii) DrawTNd;k(yTQ1X(XTQ1X)1;;(XTQ1X)1) 57
PAGE 58
36 ])havefunctionsforgeneratingrandommatricesfromtheInverseWishartdistribution.OnewaytogenerateZNm;r(;;)istorstindependentlydrawZiindNr(i;)whereTiistheithrowoffori=1;:::;m.ThentakeZ=1 2266666664ZT1ZT2...ZTm377777775: 4{3 ).WerstdeveloptheDAalgorithminSection 4.3.1 usingthelatentdataq=(q1;q2;;qn)andthejointposteriordensityof(q;;jy).WethenderivetheHaarPX-DAalgorithminsection 4.3.2 .Inthespecialcase,whentheobservations,yi's,areassumedtobefromamultivariateStudent'stlocation-scalemodel,vanDykandMeng[ 51 ]developedthemarginalaugmentationalgorithm,whichisamodiedversionofthestandarddataaugmentationalgorithm,forthedensity( 4{3 ).HobertandMarchev[ 17 ]haveshownthatwhenthegroupstructureexploitedbyLiuandWu[ 27 ]exists,marginalaugmentationalgorithm(withleft-Haarmeasurefortheworkingprior)isexactlysameasthe LiuandWu 's[ 1999 ]HaarPX-DAalgorithm.Insection 4.3.2 weshowthatasimilargroupstructurecanbeestablishedforanalyzingtheposteriordensityin( 4{3 )andsomarginalaugmentationalgorithmisthesameastheHaarPX-DAalgorithminourcase.
PAGE 59
4.2 ,weknowthatconditionallyj;q;yfollowsaMatrixNormaldistributionandtheconditionaldistributionofjq;yisanInverseWishartdistribution.Conditionalon(;;y)qi'sareindependentwithqij;;yindh(qi)qd 2:
PAGE 60
HobertandMarchev 's[ 2008 ]Section4.3.FromSection 4.2 ,weknowthat(qjy)/jXTQ1Xjd q:d q:nk g=c3Z10nYi=11 :Inordertoshowthatm(q)<1,supposex1 ;x>0andconsiderthestandardnoninformativeprior,1=,forthescaleparameter,.ThenthecorrespondingposteriordistributionispropersinceZ101 d =1 )=1
PAGE 61
28 ]itfollowsthatR10Qni=11 <1forallx1;x2;:::;xn>0andn1sinceQni=11 .Hence,itfollowsthatm(q)<1.ConsiderthefollowingunivariatedensityonR+eq(g)=gn1 (i) Drawq(qj0;0;y) (ii) Drawgeq(g)andsetq0=gq Draw;(;jq0;y)TheMarkovtransitiondensityofthePX-DAalgorithmcanbewrittenas~k(;j0;0)=ZRn+ZRn+(;jq0;y)R(q;dq0)(qj0;0;y)dq;whereR(q;dq0)istheMarkovtransitiondensityinducedbythestep2,thattakesq!q0=gq.InthespecialcasewhenwehavemultivariateStudent'stdata,itiseasytoseethatthedensityeq(g)isGamma(n 61
PAGE 62
jPj:SincePisap.d.matrix,jPj>0.Similarly,sincePxxTisap.s.d.matrix,jPxxTj0.ThenfromtheaboveidentityitfollowsthatxTP1x1. ThefollowinglemmaestablishesthedriftinequalityfortheDAalgorithm. +d2V(0;0)+n+dk +d2:
PAGE 63
Tocalculatetheaboveconditionalexpectationsweneedthecorrespondingconditionaldistributions.Therequiredconditionaldistributions,asderivedintheprevioussections,arethefollowingTj;q;yNd;k(T;;)jq;yIWdnk;yTQ1yT11andqij;;yind+d 2i=1;2;:::;n:Startingwiththeinnermostexpectation,wehaveE(V(;)j;q;y)=EnXi=1yTi1yi2nXi=1yTi1Txi+nXi=1xTi1Txi;q;y=nXi=1yTi1yi2nXi=1EyTi1Txi;q;y+nXi=1ExTi1Txi;q;y:Tocalculatetheaboveexpectationsweusethefollowingpropertyofmatrixnormaldistribution.LetZNm;r(;;).IfCandDaretwomatricesofdimensionnmandrsrespectivelythenCZDNn;s(CD;CC0;D0D)[ 2 ,chap17].So,1 2T;q;yNd;k(1 2T;I;).Notethat,1T=1 2TT(1 2T.Therefore,1T;q;yWk(d;;1T),wherebyVWp(m;;)wemeanthatVhaspdimensionalnoncentralWishartdistributionwithmdegreesoffreedom,covariancematrixandwithnoncentralityparameter[ 2 ,chap17].Inthiscase,E(V)=m+.If=0,wesaythatVhasacentralWishartdistributionandwewriteVWp(m;).So,wehave 63
PAGE 64
4{12 ),weusethefactthatifXIWp(m;),thenX1Wp(m;)andE(X1)=m[ 29 ,p.85].Thus,E[E(V(;)j;q;y)jq;y]=nXi=1(yiTxi)TE1jq;y(yiTxi)+dnXi=1xTixi=(nk)nXi=1(yiTxi)T(yTQ1yT1)1(yiTxi)+dnXi=1xTixi:
PAGE 65
1 yields (yiTxi)T(yTQ1yT1)1(yiTxi)1 1 gives Therefore,wegetE(V(;)jq;y)(n+dk)nXi=11 2i=1;2;:::;n:So,usingthefactthatifwGamma(a;b)thenE(1 a1,nally,wehaveE(V(;)j0;0)n+dk +d2nXi=1(yi0Txi)T(0)1(yi0Txi)+;whichprovesthelemma. Thefollowinglemmaestablishesanassociatedminorizationcondition. 65
PAGE 66
llog1+l and(a;b;x)denotestheGamma(a,b)densityevaluatedatthepointx. Proof. 2nYi=1inf(0;0)2S+d 2nYi=1inf(0;0)2Si+d 2:Thenby Hobert 's[ 2001 ]Lemma1,itstraightforwardlyfollowsthat(qj0;0;y)nYi=1g(qi)8(0;0)2S:
PAGE 67
+d2isstrictlylessthan1.Westatethisinthefollowingtheorem. +d2<1i.e.,n<+k2. 's[ 2008 ]Proposition6showsthattheHaarPX-DAalgorithmisatleastasecient(ineciencyordering)astheDAalgorithm.UsingsimilarargumentsasinCorollary1,wecanshowthatgeometricergodicityoftheDAalgorithmimpliesthatoftheHaarPX-DAalgorithm.Hencewehavethefollowingcorollary. 2 matcheswith MarchevandHobert 's[ 2004 ]Theorem10inthecasewhenk=1.WealsocanprovethegeometricergodicityoftheHaarPX-DAalgorithmbydirectlyestablishingadriftandminorizationconditionforit.WeactuallycanusethesamedriftfunctionV(;)=Pni=1(yiTxi)T1(yiTxi)toestablishadriftconditionfortheHaarPX-DAalgorithm. (n2)(+d2)V(0;0)+n(n+dk) (n2)(+d2):
PAGE 68
4.3.2 weknowthatq0=gqwheregjqGamma(n 4{13 )andthenstraightforwardalgebrashowsthatE(V(;)jq0;y)=(nk) gnXi=1xTixi:So,E(V(;)jq;y)=(nk)nXi=1(yiTxi)T(yTQ1yT1)1(yiTxi)+dnXi=1xTixiE1 4{14 )and( 4{15 )itthenfollowsthatE(V(;)jq;y)(n+dk) 2i=1;2;:::;n;wehaveE(V(;)j0;0)(n+dk) +d2nXi=1Xj6=i+(0Txiyi)T(0)1(0Txiyi)
PAGE 69
OurminorizationconditionfortheDAalgorithmstraightforwardlygeneralizestoaminorizationconditionforthePX-DAalgorithm. 3 .Together,Lemmas 4 and 5 provethefollowingtheorem. (n2)(+d2)<1.AsacorollaryofTheorem 4 (seeCorollary 2 )weknowthatthePX-DAalgorithmisgeometricallyergodicifn+dk +d2<1.AtrstitmightappearthatTheorem 5 isabetterresultthanCorollary 2 .But,wenowshowthatitcanneverhappenthatbothofthefollowinginequalitiesholdtogether +d21;(n+dk)(+d)(n1) (n2)(+d2)<1:(4{16)Notethat(n+dk)(+d)(n1) (n2)(+d2)
PAGE 70
4{17 )holdsifandonlyifn+k2andn<1+2 4 andTheorem 5 areupshotsoftheparticulardriftfunctionV(;)andtheinequalities( 4{14 )and( 4{15 )thatweusedtoprovethedriftconditions.Inouropinion,tomakesubstantialimprovementofTheorem 4 andTheorem 5 ,weeitherhavetoconsideradierentdriftfunctionorresorttoaaltogetherdierenttechniqueofprovinggeometricergodicityotherthanestablishingdriftandminorizationcondition.Eitherofthesetwowouldrequireustostartfromscratch. 70
PAGE 71
(h;g) 2.Iftheresultingnormedspaceiscomplete,itiscalledaHilbertspace.LetHbeaHilbertspaceoverCandT:H!Hbealineartransformation,calledalinearoperator.TheoperatorTissaidtobeboundedifthereexistsanM>0suchthatkThkMkhkforallh2H,wherethenorm,kk,isasdenedabove.LetB(H)bethecollectionofalllinear,boundedoperatorsfromHintoH.ForT2B(H),denetheoperatornormofTbykTk=supkThk khk:h2H;h6=0:ItiseasytoseethatkTk=supfkThk:h2H;khk1g=supfkThk:h2H;khk=1g:
PAGE 72
47 ,p.311].Somepropertiesofadjointoperatorsarelistedbelow:kTk=kTkT=TkTTk=kTk2andifSalsobelongstoB(H)then(ST)=TS:Wenowcandenedierenttypesofoperators.AnoperatorT2B(H)issaidtobe 4 weknowthat(h;Th)= (Th;h). 72
PAGE 73
47 ,p.310]. Proof. SupposeT2B(H).TheoperatorTisinvertibleifthereisanS2B(H)suchthatTS=I=ST.TheoperatorSiscalledtheinverseofTandwewriteS=T1.SupposeR(T)denotestherangeofT,i.e.,R(T)=fTh:h2Hg.ThenTisinvertibleifandonlyifR(T)=HandTisone-to-one.Thespectrum,(T),oftheoperatorT2B(H)isdenedasfollows(T)=f:TIisnotinvertibleg:Thus2(T)ifandonlyifatleastoneofthefollowingtwostatementsistrue: (i) TherangeofTIisnotallofHi.e.,TIisnotonto. (ii) TheoperatorTIisnotone-to-one. 73
PAGE 74
9 ,p.83]. n: Proof.
PAGE 75
4 tellsusthat(T)isaboundedsetforanyboundedlinearoperatorTbecause(T)f2C:jjkTkg: Proof. ItcanalsobeprovedthatforT2B(H);H6=f0g,thespectrumofT,(T),isnotempty[ 47 ,p.253].Thespectralradius,rT,ofTisdenedasrT=supfjj:2(T)g:
PAGE 76
PAGE 77
PAGE 78
PAGE 79
13 isofcourseaboundedoperatoranditcanbeprovedthatkf(T)ksupfjf()j:2(T)g.Wealsohavethefollowingtheoremwhichdeterminesthespectrumoftheoperatorf(T). 37 ]givesanice,concisedescriptionofcompactoperators.ForacompactoperatorT,thespectrum,(T),isatmostcountable,andhasatmostonelimitpoint,namely,0.Also,anynon-zerovalueinthespectrumisnecessarilyaneigenvalueofT.Westatetheseresultsinthefollowingtheorem([ 8 ]p.214,[ 9 ]chapter5). 79
PAGE 80
47 ]p.112,[ 8 ]p.267).Let(;B();)beameasurespace.Suppose,t:!CbesuchthatZZjt(x;y)j2d(x)d(y)<1:ThentheassociatedHilbert-SchmidtintegraloperatoristheoperatorT:L2(;C)!L2(;C)givenby(Tf)(x)=Zt(x;y)f(y)d(y):
PAGE 81
PAGE 82
MeynandTweedie 's[ 1993 ]Proposition17.4.1itfollowsthatforHarrisergodicMarkovchainstheonlyfunctionsinL2()satisfyingtheequationlPf=0are-almosteverywhereconstant.SinceL20()doesnotincludefunctionswhichare-almosteverywhereconstant(non-zero)andsinceweconsiderHarrisergodicMarkovchains,theoperatorlPisone-to-oneinourcase.But,lPmightnotbeanontooperatorandsoitmightnotbeinvertible.FromthedenitionofspectrumofanoperatorweknowthatlPisinvertiblei162(P).Since(P)isaclosedsetitfollowsthat162(P)ifandonlyifsup(P)<1.HencetheLaplacianoperator,lP,isinvertibleisup(P)<1.RecallfromChapter2thatwesayaCLTholdsfor 13 ]thataCLTholdsforanyfunctionflyinginR(lP),therangeoflP,andthecorrespondingasymptoticvarianceintheCLTisgivenbyv(f;P)=kgk2kPgk2;wheref=lPgg2L20().So,iflPisinvertible,thenCLTholdsforeveryfunctionf2L20().Also,fromTheorem 7 weknowthatifkPk<1,thenlPisalwaysinvertible.Hence,wehavethefollowingproposition.
PAGE 83
32 ].From[ 13 ]itthenfollowsthatCLTholdsforallf2D(l1P)andbylemma3.2ofMiraandGeyer[ 32 ]wehavev(f;P)=(f;[2l1PI]f);8f2D(l1P):InChapter2,wedenedtheeciencyorderingdueto[ 32 ].TheproblemwitheciencyorderingisthatevenwhenweconcludePEQ,itmighthappenthatv(f;P)=v(f;Q)foreveryf2L2()thatareofinterest.Inthischapter,wediscussconditionsthatguarantee 5{1 )implies( 5{2 ).But,( 5{2 )doesnotnecessarilyimply( 5{1 ).LetlPandlQbetheLaplacianoperatorscorrespondingtotheMarkovoperatorsPandQ.IfweassumethatbothlPandlQareinvertiblethenitfollowsthatD(l1P)=L20()=D(l1Q).Hence,wehavev(f;P)=(f;[2l1PI]f);8f2L20()andv(f;Q)=(f;[2l1QI]f);8f2L20(): 2I0 83
PAGE 84
2I0: 32 ]. 3 ,ifwereplacetheinequalitiesbystrictinequalitiesandcby1,wegetconditionsforstricteciency.FortherestofthissectionweassumethatwehavereversibleMarkovchains.WeknowthattheMarkovtransitionoperator,P,ofareversibleMarkovchainisself-adjoint.So,wecanusespectraltheorytoanalyzereversibleMarkovchains.Let 24 ]showthatifv(f;P)in( 5{3 )isnite,thenthereisaCLTforfwiththeasymptoticvariance,v(f;P)givenby( 5{3 ).Nowonwards,wedenoteEf;f()byEfP().RecallfromChapter2thatPissaidtobebetterthanQintheeciencyorderingwrittenPEQ,ifv(f;P)v(f;Q)foreveryf2L2().ItcanbeeasilyshownthatthedenitionofeciencyorderingisunchangedifthespaceL2()isreplacedwithL20().Itistemptingtoconcludefrom( 5{3 )andthespectraltheoremthatPEQiI+Q IQI+P IP.But,since(P)[1;1],thefunctionh(x)=1+x 84
PAGE 85
32 ]. 35 ],Tierney[ 50 ]).Also,Tierney[ 50 ]showsthatPeskunorderingimpliescovarianceordering.So,P1I,i.e.,lP=IPisapositiveoperator.Fromourdiscussionintheprevioussectionitthenfollowsthatthereexistsauniquesquarerootl1 2PoflP.KipnisandVaradhan[ 24 ]showthatv(f;P)<1ifandonlyiff2R(l1 2P).Notethat,thisdoesnotcontradicttheresultofGordinandLifsic[ 13 ]sinceR(lP)R(l1 2P).MiraandGeyer[ 32 ]provethatP1QifandonlyifPEQ.WegiveanalternativeproofforP1QimpliesPEQ.ThefollowingtheoremisfromBendatandSherman[ 3 ][seealso 32 ,Theorem4.1]. 3 ]mentionthatthefunctionh(x)=ax+b cx+dwithadbc>0satisestheconditionofTheorem 16 eitherinx>d corx
PAGE 86
cx+d,wehaveadbc=2(>0)andd c=1 16 .Since,weareassumingthatP1Qi.e.,QP,byTheorem 16 wehaveh(Q)h(P)for2(0;1):For2(0;1),h(x)isalsoaboundedBorelfunctionon[-1,1].Sowecanusethespectraltheoremtodenetheoperatorh(P).Byspectraltheorem,wehave(h(P)f;f)=Z(P)h()EfP(d):Let+(P)=(P)T[0;1]and(P)=(P)T[1;0)thenwecanwriteZ(P)h()EfP(d)=Z+(P)h()EfP(d)+Z(P)h()EfP(d):Notethat,d dh()=2
PAGE 87
Proof. 16 wehaveQP)I+Q IQI+P IP:Since(P)<1,thefunctionh(x)iscontinuouson(P).ByTheorem 14 ,weknowthath(P)isaboundedoperator.SinceweconsiderthescalareldtobeR,byTheorem 14 ,wealsoknowthath(P)isself-adjoint.Letg(x)=x1 16 ongtoshowI+Q IQI+P IP)QP:Hence,wegetQP,I+Q IQI+P IP:Thefunctionh(x)=1+x IP.Then,by( 5{3 )wehaveI+Q IQI+P IP,PEQ:Hence,P1Q,QP,I+Q IQI+P IP,PEQ:
PAGE 88
2 weknowthatifkPk<1,thenCLTholdsforeverysquareintegrablefunctionf.ItiseasytoseethatifPisacompactoperatorthenkPk<1.So,inordertoestablishCLTforaMarkovchainwecanshowthatthecorrespondingMarkovtransitionoperator,P,iscompact.InSection 5.1 ,wementionedthatanyHilbertSchmidtoperatoriscompact.RecallfromChapter2thatifareversibleMarkovchainisgeometricallyergodic,thentheCLTholdsforeverysquareintegrablefunctionf[ 41 ].AlsowementionedbeforethatareversibleMarkovchain,P,isgeometricallyergodicifandonlyifkPk<1[ 41 44 ].SchervishandCarlin[ 48 ]andLiuetal.[ 26 ])haveprovedgeometricergodicityofcertainMarkovchainsbyestablishingthatthecorrespondingMarkovoperator,P,isHilbertSchmidt. 88
PAGE 89
Herewestate ChenandShao 's[ 2000 ]necessaryandsucientconditionsforc1(y)<1aswellasasimplemethodforcheckingtheseconditions.LetXdenotethenpmatrixwhoseithrowisxTiandletWdenoteannpmatrixwhoseithrowiswTi,wherewi=8><>:xiifyi=0xiifyi=1: 6 ]Thefunctionc1(y)isniteifandonlyif (i) thedesignmatrixXhasfullcolumnrank,and (ii) thereexistsavectora=(a1;:::;an)TwithstrictlypositivecomponentssuchthatWTa=0.AssumingthatXhasfullcolumnrank,thesecondconditionofProposition 5 canbestraightforwardlycheckedwithasimplelinearprogramimplementableintheRprogramminglanguage[ 18 ]usingthe\simplex"functionfromthe\boot"library.Let1andJdenoteacolumnvectorandamatrixof1s,respectively.Thelinearprogramcallsformaximizing1Tasubjectto 5 issatisedandc1(y)<1.Moreover,itisstraightforwardtoshowthatifacontainsoneormorezeros,thentheredoesnotexistanawithallpositiveelementssuchthatWTa=0,soc1(y)=1. 89
PAGE 90
[1] Albert,J.H.andChib,S.(1993),\Bayesiananalysisofbinaryandpolychotomousresponsedata,"JournaloftheAmericanStatisticalAssociation,88,669{679. [2] Arnold,S.F.(1981),TheTheoryofLinearModelsandMultivariateAnalysis,Wiley,NewYork. [3] Bendat,J.andSherman,S.(1955),\Monotoneandconvexoperatorfunctions,"TransactionsoftheAmericanMathematicalSociety,79,58{71. [4] Casella,G.andGeorge,E.(1992),\ExplainingtheGibbssampler,"TheAmericanStatistician,46,167{174. [5] Chan,K.S.andGeyer,C.J.(1994),\Discussionof\MarkovchainsforExploringPosteriorDistributions","TheAnnalsofStatistics,22,1747{1757. [6] Chen,M.-H.andShao,Q.-M.(2000),\Proprietyofposteriordistributionfordichotomousquantalresponsemodels,"ProceedingsoftheAmericanMathemati-calSociety,129,293{302. [7] Chib,S.andGreenberg,E.(1995),\UnderstandingtheMetropolis-Hastingsalgorithm,"TheAmericanStatistician,49,327{335. [8] Conway,J.B.(1990),ACourseinFunctionalAnalysis,Springer-Verlag,NewYork,2nded. [9] Devito,C.L.(1990),FunctionalAnalysisandLinearOperatorTheory,Addison-WesleyPublishingCompany. [10] Eaton,M.L.(1989),GroupInvarianceApplicationsinStatistics,Hayward,CaliforniaandAlexandria,Virginia:InstituteofMathematicalStatisticsandtheAmericanStatisticalAssociation. [11] Feller,W.(1968),AnIntroductiontoProbabilityTheoryanditsApplications,vol.I,NewYork:JohnWiley&Sons,3rded. [12] Fernandez,C.andSteel,M.F.J.(1999),\MultivariateStudent-tregressionmodels:Pitfallsandinference,"Biometrika,86,153{167. [13] Gordin,M.I.andLifsic,B.A.(1978),\ThecentrallimittheoremforstationaryMarkovprocesses,"SovietMathematics.Doklady,19,392{394. [14] Hobert,J.P.(2001),\Discussionof\Theartofdataaugmentation"byD.A.vanDykandX.-L.Meng,"JouranlofComputationalandGraphicalStatistics,10,59{68. [15] Hobert,J.P.andGeyer,C.J.(1998),\GeometricErgodicityofGibbsandBlockGibbsSamplersforaHierarchicalRandomEectsModel,"JournalofMultivariateAnalysis,67,414{430. 90
PAGE 91
Hobert,J.P.,Jones,G.L.,Presnell,B.,andRosenthal,J.S.(2002),\OntheapplicabilityofregenerativesimulationinMarkovchainMonteCarlo,"Biometrika,89,731{743. [17] Hobert,J.P.andMarchev,D.(2008),\Atheoreticalcomparisonofthedataaugmentation,marginalaugmentationandPX-DAalgorithms,"TheAnnalsofStatistics,36,532{554. [18] Ihaka,R.andGentleman,R.(1996),\R:ALanguageforDataAnalysisandGraphics,"JournalofComputationalandGraphicalStatistics,5,299{314. [19] Johnson,N.L.andKotz,S.(1970),ContinuousUnivariateDistributions-1,JohnWiley&Sons. [20] Jones,G.L.(2004),\OntheMarkovchaincentrallimittheorem,"ProbabilitySurveys,1,299{320. [21] Jones,G.L.,Haran,M.,Cao,B.S.,andNeath,R.(2006),\Fixed-widthoutputanalysisforMarkovchainMonteCarlo,"JournaloftheAmericanStatisticalAssocia-tion,101,1537{1547. [22] Jones,G.L.andHobert,J.P.(2001),\HonestexplorationofintractableprobabilitydistributionsviaMarkovchainMonteCarlo,"StatisticalScience,16,312{34. [23] |(2004),\Sucientburn-inforGibbssamplersforahierarchicalrandomeectsmodel,"TheAnnalsofStatistics,32,784{817. [24] Kipnis,C.andVaradhan,S.R.S.(1986),\CentrallimittheoremforadditivefunctionalsofreversibleMarkovprocessesandapplicationstosimpleexclusions,"CommunicationsinMathematicalPhysics,104,1{19. [25] Liu,J.S.,Wong,W.H.,andKong,A.(1994),\CovarianceStructureoftheGibbsSamplerwithApplicationstoComparisonsofEstimatorsandAugmentationSchemes,"Biometrika,81,27{40. [26] |(1995),\CovarianceStructureandConvergenceRateoftheGibbsSamplerwithVariousScans,"JournaloftheRoyalStatisticalSociety,SeriesB,57,157{169. [27] Liu,J.S.andWu,Y.N.(1999),\ParameterExpansionforDataAugmentation,"JournaloftheAmericanStatisticalAssociation,94,1264{1274. [28] Marchev,D.andHobert,J.P.(2004),\GeometricergodicityofvanDykandMeng'salgorithmforthemultivariateStudent'stmodel,"JournaloftheAmericanStatisticalAssociation,99,228{238. [29] Mardia,K.,Kent,J.,andBibby,J.(1979),MultivariateAnalysis,London:Academicpress. 91
PAGE 92
Meyn,S.P.andTweedie,R.L.(1993),MarkovChainsandStochasticStability,London:SpringerVerlag. [31] |(1994),\ComputableboundsforgeometricconvergenceratesofMarkovchains,"TheAnnalsofAppliedProbability,4,981{1011. [32] Mira,A.andGeyer,C.J.(1999),\OrderingMonteCarloMarkovchains,"Tech.Rep.No.632,SchoolofStatistics,UniversityofMinnesota. [33] Mykland,P.,Tierney,L.,andYu,B.(1995),\RegenerationinMarkovchainSamplers,"JournaloftheAmericanStatisticalAssociation,90,233{41. [34] Nummelin,E.(1984),GeneralIrreducibleMarkovChainsandNon-negativeOpera-tors,London:CambridgeUniversityPress. [35] Peskun,P.H.(1973),\OptimumMonteCarlosamplingusingMarkovchains,"Biometrika,60,607{612. [36] RDevelopmentCoreTeam(2006),R:ALanguageandEnvironmentforStatisticalComputing,RFoundationforStatisticalComputing,Vienna,Austria. [37] Retherford,J.R.(1993),HilbertSpace:CompactOperatorsandtheTracetheorem,CambridgeUniversityPress. [38] Robert,C.andCasella,G.(2004),MonteCarloStatisticalMethods,Springer,NewYork. [39] Robert,C.P.(1995),\Simulationoftruncatednormalvariables,"StatisticsandComputing,5,121{125. [40] Roberts,G.andTweedie,R.(1999),\BoundsonregenerationtimesandconvergenceratesforMarkovchains,"StochasticProcessesandtheirApplications,80,221{229.Corrigendum(2001)91:337{338. [41] Roberts,G.O.andRosenthal,J.S.(1997),\GeometricergodicityandhybridMarkovchains,"ElectronicCommunicationsinProbability,2,13{25. [42] |(2001),\MarkovChainsandde-initializingprocesses,"ScandinavianJournalofStatistics,28,489{504. [43] |(2004),\GeneralStateSpaceMarkovChainsandMCMCAlgorithms,"ProbabilitySurveys,1,20{71. [44] Roberts,G.O.andTweedie,R.L.(2001),\GeometricL2andL1convergenceareequivalentforreversibleMarkovchains,"JournalofAppliedProbability,38A,37{41. [45] Rosenthal,J.S.(1995),\MinorizationConditionsandConvergenceRatesforMarkovChainMonteCarlo,"JournaloftheAmericanStatisticalAssociation,90,558{566. 92
PAGE 93
Roy,V.andHobert,J.P.(2007),\ConvergenceratesandasymptoticstandarderrorsforMCMCalgorithmsforBayesianprobitregression,"JournaloftheRoyalStatisticalSociety,SeriesB,69,607{623. [47] Rudin,W.(1991),FunctionalAnalysis,McGraw-Hill,2nded. [48] Schervish,M.J.andCarlin,B.P.(1992),\OntheConvergenceofSuccessiveSubstitutionSampling,"JouranlofComputationalandGraphicalStatistics,1,111{127. [49] Tierney,L.(1994),\Markovchainsforexploringposteriordistributions(withdiscussion),"TheAnnalsofStatistics,22,1701{1762. [50] |(1998),\ANoteonMetropolis-HastingsKernelsforGeneralStateSpaces,"TheAnnalsofAppliedProbability,8,1{9. [51] vanDyk,D.A.andMeng,X.-L.(2001),\TheArtofDataAugmentation(withDiscussion),"JournalofComputationalandGraphicalStatistics,10,1{50. 93
PAGE 94
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9686111807823181, "perplexity": 7991.063269804051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812871.2/warc/CC-MAIN-20180220010854-20180220030854-00762.warc.gz"}
|
https://zbmath.org/authors/?q=ai%3Areich.holger
|
×
# zbMATH — the first resource for mathematics
## Reich, Holger
Compute Distance To:
Author ID: reich.holger Published as: Reich, Holger External Links: MGP
Documents Indexed: 21 Publications since 1999, including 1 Book
all top 5
#### Co-Authors
3 single-authored 10 Lück, Wolfgang 8 Bartels, Arthur C. 5 Varisco, Marco 2 Farrell, F. Thomas 2 Jones, Lowell Edwin 2 Rognes, John 1 Davis, James Frederic 1 Hambleton, Ian 1 Pedersen, Erik Kjær 1 Quinn, Frank S. 1 Ranicki, Andrew A. 1 Rüping, Henrik 1 Schick, Thomas
all top 5
#### Serials
2 Advances in Mathematics 2 $$K$$-Theory 2 Journal of Topology 1 Publications Mathématiques 1 Inventiones Mathematicae 1 Journal für die Reine und Angewandte Mathematik 1 Proceedings of the London Mathematical Society. Third Series 1 Topology 1 Journal of the American Mathematical Society 1 Geometry & Topology 1 Algebraic & Geometric Topology 1 Oberwolfach Reports
all top 5
#### Fields
17 $$K$$-theory (19-XX) 7 Algebraic topology (55-XX) 5 Group theory and generalizations (20-XX) 4 Functional analysis (46-XX) 4 Manifolds and cell complexes (57-XX) 3 Category theory; homological algebra (18-XX) 2 Associative rings and algebras (16-XX) 1 General and overarching topics; collections (00-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Differential geometry (53-XX) 1 Global analysis, analysis on manifolds (58-XX)
#### Citations contained in zbMATH Open
18 Publications have been cited 241 times in 121 Documents Cited by Year
The Baum-Connes and the Farrell-Jones conjectures in $$K$$- and $$L$$-theory. Zbl 1120.19001
Lück, Wolfgang; Reich, Holger
2005
The $$K$$-theoretic Farrell-Jones conjecture for hyperbolic groups. Zbl 1143.19003
Bartels, Arthur; Lück, Wolfgang; Reich, Holger
2008
Coefficients for the Farrell-Jones conjecture. Zbl 1161.19003
Bartels, Arthur; Reich, Holger
2007
On the isomorphism conjecture in algebraic $$K$$-theory. Zbl 1036.19003
Bartels, Arthur; Farrell, Tom; Jones, Lowell; Reich, Holger
2004
On the Farrell-Jones conjecture and its applications. Zbl 1141.19002
Bartels, Arthur; Lück, Wolfgang; Reich, Holger
2008
Equivariant covers for hyperbolic groups. Zbl 1185.20045
Bartels, Arthur; Lück, Wolfgang; Reich, Holger
2008
$$K$$- and $$L$$-theory of group rings over $$\mathrm{GL}_n(\mathbb Z)$$. Zbl 1300.19001
Bartels, Arthur; Lück, Wolfgang; Reich, Holger; Rüping, Henrik
2014
On the Farrell-Jones conjecture for higher algebraic $$K$$-theory. Zbl 1073.19002
Bartels, Arthur; Reich, Holger
2005
Algebraic $$K$$-theory over the infinite dihedral group: a controlled topology approach. Zbl 1227.19004
Davis, James F.; Quinn, Frank; Reich, Holger
2011
Detecting $$K$$-theory by cyclic homology. Zbl 1116.19002
Lück, Wolfgang; Reich, Holger
2006
Commuting homotopy limits and smash products. Zbl 1053.55004
Lück, Wolfgang; Reich, Holger; Varisco, Marco
2003
On the K- and L-theory of the algebra of operators affiliated to a finite von Neumann algebra. Zbl 0998.19005
Reich, Holger
2001
On the Adams isomorphism for equivariant orthogonal spectra. Zbl 1350.55015
Reich, Holger; Varisco, Marco
2016
Group von Neumann algebras and related algebras. Zbl 1015.55003
Reich, Holger
1999
Novikov-Shubin invariants for arbitrary group actions and their positivity. Zbl 0939.55011
Lück, Wolfgang; Reich, Holger; Schick, Thomas
1999
Algebraic $$K$$-theory, assembly maps, controlled algebra, and trace methods. Zbl 1409.19001
Reich, Holger; Varisco, Marco
2018
Algebraic K-theory of group rings and the cyclotomic trace map. Zbl 1357.19002
Lück, Wolfgang; Reich, Holger; Rognes, John; Varisco, Marco
2017
A foliated sequeezing theorem for geometric modules. Zbl 1049.57014
Bartels, Arthur; Farrell, Tom; Jones, Lowell; Reich, Holger
2003
Algebraic $$K$$-theory, assembly maps, controlled algebra, and trace methods. Zbl 1409.19001
Reich, Holger; Varisco, Marco
2018
Algebraic K-theory of group rings and the cyclotomic trace map. Zbl 1357.19002
Lück, Wolfgang; Reich, Holger; Rognes, John; Varisco, Marco
2017
On the Adams isomorphism for equivariant orthogonal spectra. Zbl 1350.55015
Reich, Holger; Varisco, Marco
2016
$$K$$- and $$L$$-theory of group rings over $$\mathrm{GL}_n(\mathbb Z)$$. Zbl 1300.19001
Bartels, Arthur; Lück, Wolfgang; Reich, Holger; Rüping, Henrik
2014
Algebraic $$K$$-theory over the infinite dihedral group: a controlled topology approach. Zbl 1227.19004
Davis, James F.; Quinn, Frank; Reich, Holger
2011
The $$K$$-theoretic Farrell-Jones conjecture for hyperbolic groups. Zbl 1143.19003
Bartels, Arthur; Lück, Wolfgang; Reich, Holger
2008
On the Farrell-Jones conjecture and its applications. Zbl 1141.19002
Bartels, Arthur; Lück, Wolfgang; Reich, Holger
2008
Equivariant covers for hyperbolic groups. Zbl 1185.20045
Bartels, Arthur; Lück, Wolfgang; Reich, Holger
2008
Coefficients for the Farrell-Jones conjecture. Zbl 1161.19003
Bartels, Arthur; Reich, Holger
2007
Detecting $$K$$-theory by cyclic homology. Zbl 1116.19002
Lück, Wolfgang; Reich, Holger
2006
The Baum-Connes and the Farrell-Jones conjectures in $$K$$- and $$L$$-theory. Zbl 1120.19001
Lück, Wolfgang; Reich, Holger
2005
On the Farrell-Jones conjecture for higher algebraic $$K$$-theory. Zbl 1073.19002
Bartels, Arthur; Reich, Holger
2005
On the isomorphism conjecture in algebraic $$K$$-theory. Zbl 1036.19003
Bartels, Arthur; Farrell, Tom; Jones, Lowell; Reich, Holger
2004
Commuting homotopy limits and smash products. Zbl 1053.55004
Lück, Wolfgang; Reich, Holger; Varisco, Marco
2003
A foliated sequeezing theorem for geometric modules. Zbl 1049.57014
Bartels, Arthur; Farrell, Tom; Jones, Lowell; Reich, Holger
2003
On the K- and L-theory of the algebra of operators affiliated to a finite von Neumann algebra. Zbl 0998.19005
Reich, Holger
2001
Group von Neumann algebras and related algebras. Zbl 1015.55003
Reich, Holger
1999
Novikov-Shubin invariants for arbitrary group actions and their positivity. Zbl 0939.55011
Lück, Wolfgang; Reich, Holger; Schick, Thomas
1999
all top 5
#### Cited by 118 Authors
18 Lück, Wolfgang 16 Bartels, Arthur C. 8 Reich, Holger 5 Farrell, F. Thomas 5 Juan-Pineda, Daniel 5 Winges, Christoph 5 Wu, Xiaolei 4 Bunke, Ulrich 4 Kasprowski, Daniel 4 Roushon, Sayed K. 4 Rüping, Henrik 4 Schick, Thomas 4 Yu, Guoliang 3 Cortiñas, Guillermo H. 3 Davis, James Frederic 3 Degrijse, Dieter 3 Engel, Alexander 3 Khan, Qayum 3 Kyed, David 3 Linnell, Peter Arnold 3 Rosenthal, David S. H. 3 Ullmann, Mark 3 Varisco, Marco 3 Wegner, Christian 2 Bestvina, Mladen 2 Connolly, Frank 2 Ellis, Eugenia 2 Gandini, Giovanni 2 Hirshberg, Ilan 2 Martínez-Pérez, Conchita 2 Petrosyan, Nansen 2 Prassidis, Stratos 2 Rognes, John 2 Sánchez Saldaña, Luis Jorge 2 Schütz, Dirk 2 Szabó, Gábor J. 2 Tartaglia, Gisela 2 Thom, Andreas Berthold 2 Winter, Wilhelm 2 Wu, Jianchao 2 Zacharias, Joachim 1 Alekseev, Vadim 1 Antieau, Benjamin 1 Antolín, Yago 1 Balmer, Paul 1 Banagl, Markus 1 Bartholdi, Laurent 1 Battikh, Naoufel 1 Blomer, Inga 1 Brück, Benjamin 1 Caputi, Luigi 1 Chung, Yeong Chyuan 1 Cisinski, Denis-Charles 1 Coulon, Rémi 1 Crowley, Diarmuid John 1 Deninger, Christopher 1 Dingoyan, Pascal 1 Dundas, Bjørn Ian 1 Elek, Gábor 1 Enkelmann, Nils-Edvin 1 Fiore, Thomas M. 1 Funke, Florian 1 Gąsior, Anna 1 Gepner, David 1 Gonçalves, Daciberg Lima 1 Gong, Sherry 1 Guaschi, John 1 Guentner, Erik Paul 1 Heller, Jeremiah 1 Hidber, Cristhian E. 1 Horbez, Camille 1 Jaikin-Zapirain, Andrei 1 Ji, Lizhen 1 Kammeyer, Holger 1 Kang, Hyosang 1 Kielak, Dawid 1 Knebusch, Anselm 1 Knopf, Svenja 1 Köhl, Ralf 1 Lafont, Jean-François 1 Land, Markus 1 Leichtnam, Eric 1 Lorman, Vitaly 1 Lutowski, Rafał 1 Macko, Tibor 1 Mahanta, Snigdhayan 1 Malkiewich, Cary 1 Matucci, Francesco 1 Metaftsis, Vassilis 1 Nicas, Andrew J. 1 Nikolaus, Thomas 1 Nucinkis, Brita E. A. 1 Peterson, Jesse 1 Piazza, Paolo 1 Pieper, Malte 1 Ponce Guajardo, Julia 1 Quinn, Frank S. 1 Ramos, Rafael 1 Ranicki, Andrew A. 1 Raum, Sven ...and 18 more Authors
all top 5
#### Cited in 45 Serials
8 Mathematische Annalen 7 Advances in Mathematics 7 Geometry & Topology 7 Algebraic & Geometric Topology 6 Topology and its Applications 6 Journal of Topology and Analysis 5 Proceedings of the American Mathematical Society 4 Inventiones Mathematicae 4 Journal of Pure and Applied Algebra 4 Groups, Geometry, and Dynamics 4 Journal of Topology 3 Annales de l’Institut Fourier 3 Journal of Functional Analysis 3 Journal für die Reine und Angewandte Mathematik 3 Journal of the American Mathematical Society 2 Communications in Mathematical Physics 2 Archiv der Mathematik 2 Journal of Algebra 2 Transactions of the American Mathematical Society 2 Forum Mathematicum 2 The New York Journal of Mathematics 2 Documenta Mathematica 2 Journal of Noncommutative Geometry 2 Journal of Homotopy and Related Structures 1 Russian Mathematical Surveys 1 Acta Mathematica 1 Compositio Mathematica 1 Geometriae Dedicata 1 Publications Mathématiques 1 Manuscripta Mathematica 1 Mathematische Nachrichten 1 Mathematische Zeitschrift 1 Proceedings of the London Mathematical Society. Third Series 1 Ergodic Theory and Dynamical Systems 1 $$K$$-Theory 1 Geometric and Functional Analysis. GAFA 1 Theory and Applications of Categories 1 Boletín de la Sociedad Matemática Mexicana. Third Series 1 Transformation Groups 1 Journal of Group Theory 1 Annals of Mathematics. Second Series 1 Communications of the Korean Mathematical Society 1 Journal of $$K$$-Theory 1 Annals of $$K$$-Theory 1 Transactions of the American Mathematical Society. Series B
all top 5
#### Cited in 24 Fields
77 $$K$$-theory (19-XX) 41 Group theory and generalizations (20-XX) 35 Manifolds and cell complexes (57-XX) 30 Category theory; homological algebra (18-XX) 20 Functional analysis (46-XX) 20 Algebraic topology (55-XX) 10 Associative rings and algebras (16-XX) 9 Topological groups, Lie groups (22-XX) 4 Dynamical systems and ergodic theory (37-XX) 4 Operator theory (47-XX) 4 General topology (54-XX) 4 Global analysis, analysis on manifolds (58-XX) 3 Algebraic geometry (14-XX) 3 Differential geometry (53-XX) 2 Commutative algebra (13-XX) 2 Several complex variables and analytic spaces (32-XX) 1 History and biography (01-XX) 1 Combinatorics (05-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Number theory (11-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Nonassociative rings and algebras (17-XX) 1 Abstract harmonic analysis (43-XX) 1 Geometry (51-XX)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37552741169929504, "perplexity": 14656.700133700431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00578.warc.gz"}
|
http://books.duhnnae.com/2017/jun2/149656890648-On-Ramsey-properties-of-classes-with-forbidden-trees-Jan-Foniok.php
|
# On Ramsey properties of classes with forbidden trees
On Ramsey properties of classes with forbidden trees - Download this document for free, or read online. Document in PDF available to download.
Download or read this book online for free in PDF: On Ramsey properties of classes with forbidden trees
Let $\F$ be a set of relational trees and let $\Forbh\F$ be the class of all structures that admit no homomorphism from any tree in $\F$; all this happens over a fixed finite relational signature $\sigma$. There is a natural way to expand $\Forbh\F$ by unary relations to an amalgamation class. This expanded class, enhanced with a linear ordering, has the Ramsey property.
Author: Jan Foniok
Source: https://archive.org/
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6322441697120667, "perplexity": 798.3921870295235}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823220.45/warc/CC-MAIN-20171019031425-20171019051425-00148.warc.gz"}
|
https://www.ideals.illinois.edu/handle/2142/50732
|
## Files in this item
FilesDescriptionFormat
application/pdf
Matthew_Harper.pdf (3MB)
(no description provided)PDF
## Description
Title: Benchmarking of off-road machinery operations with the use of geo-referenced data Author(s): Harper, Matthew Advisor(s): Hansen, Alan C. Department / Program: Agricultural & Biological Engr Discipline: Technical Systems Management Degree Granting Institution: University of Illinois at Urbana-Champaign Degree: M.S. Genre: Thesis Subject(s): Benchmarking of machine operations machinery productivity, efficiency, performance and cost big data analysis Abstract: Past research has revealed that farmers do not have the resources to evaluate the efficiency of their off-road machines and in order for them to do so, relevant data must be collected from those machines. The rise of modern on-board computer systems now allows researchers, farmers and off-road machinery manufacturers to collect data from off-road machines while they complete farm operations. The analysis of off-road machinery related data would allow for the benchmarking of machinery productivity, efficiency, performance and cost. Geo-referenced machinery performance data, provides an opportunity for the analysis of machinery performance in relation to unique spatial aspects of agricultural fields to determine their effects on the operation. The goal of this study was to identify, analyze and benchmark relevant geo-referenced machinery performance data based on selected productivity, efficiency, performance and cost indicators. The methodology was applied to corn planting operations on a farm in east-central Iowa involving a 24-row planter. The methodology was applied to two fields that were selected based on their differences in shape and slope (%). Field one featured a water way which split the field into two right triangles, while field two featured a high average slope (%). Field one, was found to be the more productive and efficient operation compared to field two. Actual field capacity, field efficiency, fuel efficiency and cost were 9.46 ha h-1, 56.3%, 4.27 L ha-1 and $6.54 ha-1 for field one, respectively, compared to field two’s 7.48 ha h-1, 44.5%, 5.01 L ha-1 and$7.84 ha-1. The key factor that contributed to the differences was that the tractor/planter was unproductive for 49% of the time it was in field two, compared to only 11.2% of the time in field one. The large amount of unproductive time reduced the productivity and efficiency of field two and increased the cost. A row-by-row analysis was conducted on the second operation to determine if field slope (%) was correlated with energy efficiency. The correlation analysis returned an R2 value of 0.0511, indicating no relationship existed. Engine power was found to vary significantly between certain rows. The average power in the rows was found to be 92 kW with a standard deviation of 33 kW. The average engine speed for fourteen of the seventeen rows was 1426 r min-1, compared to an average of 900 r min-1 for the remaining three rows. It was determined that the machine operator must have reduced the engine throttle when working in three of the rows. The benchmarking methodology was also used to determine the effects of the water way in field one on tractor turning performance. The presence of the water way caused the tractor to make a different shaped turn at the water way edge of the field. The average time for the tractor to complete a turn at the water way edge of the field was found to be 5.8 seconds greater than the opposite side of the field where no water way was present. The extra turning time required at the water way edge of the field increased the total turning time by 13.5%. Some assumptions were made concerning this field to predict field efficiency if the water way did not exist. Field efficiency was predicted to increase from 50.2% to 69.9%, if the water way was not present. . The benchmarking of individual machine operations conducted on a farm could be combined to benchmark the productivity, efficiency, performance and cost of all the machine operations conducted on a farm. This would empower farm managers to budget time and money more accurately for future machine operations by reviewing past benchmarking records. Farm mangers would also be able to evaluate each individual machine and operator on their farm to identify opportunities to improve their overall operation. Issue Date: 2014-09-16 URI: http://hdl.handle.net/2142/50732 Rights Information: Copyright 2014 Matthew Harper Date Available in IDEALS: 2014-09-16 Date Deposited: 2014-08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3446163535118103, "perplexity": 2266.1264433250467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00425-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-the-following-linear-system-3x-4y-z-1-3x-y-4z-0-x-3y-3z-9#241271
|
Algebra
Topics
# How do you solve the following linear system: 3x + 4y - z =1, 3x - y - 4z = 0, x + 3y - 3z = 9?
$x = - \frac{138}{55}$
$y = \frac{86}{55}$
$z = - \frac{25}{11}$
#### Explanation:
From the given equations
$3 x + 4 y - z = 1$first equation
$3 x - y - 4 z = 0$second equation
$x + 3 y - 3 z = 9$third equation
Let us eliminate x first using first and second equations by subtraction
$3 x + 4 y - z = 1$first equation
$3 x - y - 4 z = 0$second equation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$0 \cdot x + 5 y + 3 z = 1$
$5 y + 3 z = 1$ fourth equation
Let us eliminate x first using first and third equations by subtraction
$3 x + 4 y - z = 1$first equation
$x + 3 y - 3 z = 9$third equation is also
$3 x + 9 y - 9 z = 27$third equation, after multiplying each term by 3
perform subtraction using the new third equation and the first equation
$3 x + 4 y - z = 1$first equation
$3 x + 9 y - 9 z = 27$third equation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$0 \cdot x - 5 y + 8 z = - 26$
$- 5 y + 8 z = - 26$ fifth equation
Solve for y and z simultaneously using fourth and fifth equations using addition
$5 y + 3 z = 1$ fourth equation
$- 5 y + 8 z = - 26$ fifth equation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$0 \cdot y + 11 z = - 25$
$11 z = - 25$
$z = - \frac{25}{11}$
Solve y using $5 y + 3 z = 1$ fourth equation and $z = - \frac{25}{11}$
$5 y + 3 z = 1$ fourth equation
$5 y + 3 \left(- \frac{25}{11}\right) = 1$ fourth equation
$5 y - \frac{75}{11} = 1$
$5 y = \frac{75}{11} + 1$
$5 y = \frac{86}{11}$
$y = \frac{86}{55}$
Solve for x using $3 x + 4 y - z = 1$first equation and $y = \frac{86}{55}$ and $z = - \frac{25}{11}$
$3 x + 4 y - z = 1$first equation
$3 x + 4 \left(\frac{86}{55}\right) - \left(- \frac{25}{11}\right) = 1$first equation
$3 x + \frac{344}{55} + \frac{25}{11} = 1$
$3 x = 1 - \frac{344}{55} - \frac{25}{11}$
$3 x = \frac{55 - 344 - 125}{55}$
$3 x = - \frac{414}{55}$
$x = - \frac{138}{55}$
Check using the original equations
$3 x + 4 y - z = 1$first equation
$3 \left(- \frac{138}{55}\right) + 4 \left(\frac{86}{55}\right) - \left(- \frac{25}{11}\right) = 1$first equation
$- \frac{414}{55} + \frac{344}{55} + \frac{25}{11} = 1$
$\frac{- 414 + 344 + 125}{55} = 1$
$\frac{55}{55} = 1$
$1 = 1$
$3 x - y - 4 z = 0$second equation
$3 \left(- \frac{138}{55}\right) - \left(\frac{86}{55}\right) - 4 \left(- \frac{25}{11}\right) = 0$second equation
$- \frac{414}{55} - \frac{86}{55} + \frac{100}{11} = 0$
$- \frac{500}{55} + \frac{100}{11} = 0$
$0 = 0$
$x + 3 y - 3 z = 9$third equation
$\left(- \frac{138}{55}\right) + 3 \left(\frac{86}{55}\right) - 3 \left(- \frac{25}{11}\right) = 9$third equation
$- \frac{138}{55} + \frac{258}{55} + \frac{75}{11} = 9$
$\frac{120}{55} + \frac{75}{11} = 9$
$\frac{24}{11} + \frac{75}{11} = 9$
$\frac{99}{11} = 9$
$9 = 9$
The solution set is
$x = - \frac{138}{55}$
$y = \frac{86}{55}$
$z = - \frac{25}{11}$
God bless....I hope the explanation is useful.
Mar 18, 2016
$\left(x , y , z\right) = \left(- \frac{138}{55} , \frac{86}{55} , - \frac{25}{11}\right)$
#### Explanation:
We have the three equations:
$\left\{\begin{matrix}3 x + 4 y - z = 1 \text{ "" "" "" "" ""eq. 1" \\ 3x-y-4z=0" "" "" "" "" ""eq. 2" \\ x+3y-3z=9" "" "" "" "" ""eq. 3}\end{matrix}\right.$
Multiply $\text{eq. 3}$ by $- 3$ and add it to $\text{eq. 1}$:
{:(3x+4y-z=1),(ul(-3x-9y+9z=-27" "+)),(-5y+8z=-26" "" "" "" "" ""eq. 4"):}
Multiply $\text{eq. 1}$ by $- 3$ and add it to $\text{eq. 3}$:
{:(-9x-12y+3z=-3),(ul(x+3y-3z=9" "" "" "+)),(-8x-9y=6" "" "" "" "" "" "" ""eq. 5"):}
Multiply $\text{eq. 2}$ by $3$ and add it to $\text{eq. 3}$:
{:(9x-3y-12z=0),(ul(x+3y-3z=9" "" "+)),(10x-15z=9" "" "" "" "" "" "" ""eq. 6"):}
Multiply $\text{eq. 5}$ by $5$ and $\text{eq. 6}$ by $4$ and add the two:
{:(-40x-45y=30),(ul(40x-60z=36" "" "+)),(-45y-60z=66" "" "" "" "" "" ""eq. 7"):}
Multiply $\text{eq. 4}$ by $- 9$ and add it to $\text{eq. 7}$:
{:(45y-72z=234),(ul(-45y-60z=66" "" "+)),(-132z=300" "" "" "" "" "" "" "" ""eq. 8"):}
From $\text{eq. 8}$, we can deduce that
color(red)z=300/(-132)=-150/66=-50/22color(red)(=-25/11
Use this value of $z$ in $\text{eq. 6}$ to solve for $x$:
$10 x - 15 z = 9 \text{ "=>" } 10 x - 15 \left(- \frac{25}{11}\right) = 9$
$\textcolor{w h i t e}{s l} \implies \text{ } 10 x + \frac{375}{11} = 9$
$\textcolor{w h i t e}{s l} \implies \text{ } 10 x + \frac{375}{11} = \frac{99}{11}$
$\textcolor{w h i t e}{s l} \implies \text{ } 10 x = - \frac{276}{11}$
color(white)(sl)=>" "color(red)x=-276/110color(red)(=-138/55
We can also use the value of $z$ we found to solve for $y$ by plugging into $\text{eq. 4} :$
$- 5 y + 8 z = - 26 \text{ "=>" } - 5 y + 8 \left(- \frac{25}{11}\right) = - 26$
$\textcolor{w h i t e}{s l} \implies \text{ } - 5 y - \frac{200}{11} = - 26$
$\textcolor{w h i t e}{s l} \implies \text{ } - 5 y - \frac{200}{11} = - 26$
$\textcolor{w h i t e}{s l} \implies \text{ } - 5 y - \frac{200}{11} = - \frac{286}{11}$
$\textcolor{w h i t e}{s l} \implies \text{ } - 5 y = - \frac{86}{11}$
color(white)(sl)=>" "color(red)(y=86/55
This gives us the solution set of $\left(x , y , z\right)$ as:
color(blue)((-138/55,86/55,-25/11)
##### Impact of this question
4228 views around the world
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 105, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509330749511719, "perplexity": 6481.769508131196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00558.warc.gz"}
|
https://cdsweb.cern.ch/collection/ATLAS%20Internal%20Notes?ln=el&as=1
|
# CERN Accelerating science
RESTRICTED ATLAS COLLECTIONS
If you are an active ATLAS physicist but you are not able to access restricted ATLAS collections on CDS, such as ATLAS Communications or ATLAS Internal Notes, please read the Access to Protected ATLAS Information page or contact ATLAS [email protected].
# ATLAS Internal Notes
Περιορισμός με συλλογή:
[περιορισμένο]
[περιορισμένο]
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960975706577301, "perplexity": 20430.28161281285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00059.warc.gz"}
|
https://www.iacr.org/cryptodb/data/paper.php?pubkey=23226
|
## CryptoDB
### Paper: Effect of the Dependent Paths in Linear Hull
Authors: Zhenli Dai Meiqin Wang Yue Sun URL: http://eprint.iacr.org/2010/325 Search ePrint Search Google Linear Hull is a phenomenon that there are a lot of linear paths with the same data mask but different key masks for a block cipher. In 1994, Nyberg presented the effect on the key-recovery attack such as Algorithm 2 with linear hull, in which the required number of the known plaintexts can be decreased compared with that in the attack using individual linear path. In 2009, Murphy proved that Nyberg's results can only be used to give a lower bound on the data complexity and will be no use on the real linear cryptanalysis. In fact, the linear hull have this kind of positive effect in linear cryptanalysis for some keys instead of the whole key space. So the linear hull can be used to improve the traditional linear cryptanalysis for some weak keys. In the same year, Ohkuma gave the linear hull analysis on PRESENT block cipher, and pointed that there are $32\%$ weak keys of PRESENT which make the bias of a given linear hull with multiple paths more than that of any individual linear path. However, Murphy and Ohkuma have not considered the dependency of the muti-path, and their results are based on the assumption that the linear paths are independent. Actually, most of the linear paths are dependent in the linear hull, and the dependency of the linear paths means the dependency of the equivalent key bits. In this paper, we will analyze the dependency of the linear paths in linear hull and present the real effect of linear hull with the dependent linear paths. Firstly, we give the relation between the bias of a linear hull and its linear paths in linear cryptanalysis. Secondly, we present the algorithm to compute the rate of weak keys corresponding to the expect bias of the linear hull. At last, we verify our algorithm by cryptanalyzing reduced-round of PRESENT. Compared with the rate of weak keys under the assumption of the independent linear paths, the dependency of the linear paths will greatly reduce the rate of weak keys for a given linear hull.
##### BibTeX
@misc{eprint-2010-23226,
title={Effect of the Dependent Paths in Linear Hull},
booktitle={IACR Eprint archive},
keywords={secret-key cryptography / Linear Hull, Dependency of Linear Paths, Weak},
url={http://eprint.iacr.org/2010/325},
note={ [email protected] 14762 received 2 Jun 2010},
author={Zhenli Dai and Meiqin Wang and Yue Sun},
year=2010
}
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7324531078338623, "perplexity": 943.6059037324302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057091.31/warc/CC-MAIN-20210920191528-20210920221528-00179.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-7th-edition/chapter-4-reactions-in-aqueous-solution-section-problems-page-148/56
|
# Chapter 4 - Reactions in Aqueous Solution - Section Problems - Page 148: 56
The $HCl$ concentration after the dilution is equal to $1.71$ $M$
#### Work Step by Step
1. Find the concentration after the dilution: $C_1 * V_1 = C_2 * V_2$ $12*35.7= C_2 *250$ $428 = C_2 *250$ $C_2 =1.71$ $M$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6448324918746948, "perplexity": 1781.0436983720188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039379601.74/warc/CC-MAIN-20210420060507-20210420090507-00587.warc.gz"}
|
https://quizzerweb.com.ng/learn/study/physics-2017/20/2/
|
# Physics (2017) | Study Mode
Question 11
A
A compressor
B
A refrigerator
C
Air blower
D
Air conditioner
##### Explanation
Compressor is an instrument to supply air or other gas at increased pressure
Refrigerator is an appliance to keep things cold
Air blower is used to release air
Air conditioner is used to remove heat and maintain a stable temperature in an occupied space
Question 12
A
protons
B
electrons
C
ions
D
neutrons
##### Explanation
Isotopy is due to difference in mass number, what causes the difference is the difference in the number of neutrons present
Question 13
A
Balance wheel of a watch
B
Construction of steel rail lines
C
Construction of large steel bridges
D
Fitting of tyres on wheels
##### Explanation
Thermal expansion of a solid is majorly advantageous in the use of bimetallic strip in the balance of a watch so as not to lose time
Question 14
A
kgm
B
m
C
Jm
D
m$$^{-1}$$
##### Explanation
Moment of a force is the product of a force and the perpendicular distance from the line of action of the force
Moment = force $$\times$$ distance
= N $$\times$$ m
= Nm
Question 15
A
80.00%
B
76.60%
C
66.70%
D
60.70%
Question 16
A
0.8N
B
2.5N
C
0.5N
D
1.0N
##### Explanation
Upthrust = change in weight
density = $$\frac{\text{mass}}{\text{volume}}$$
Mass of liquid displaced = Density of Liquid $$\times$$ Volume;
= 103 $$\times$$ 50 $$\times$$ 10-6m3
= 0.05kg
(Note that the volume of 50cm3 was converted to m3 by multiplying by 106) i.e 50cm3 = 50 $$\times$$ 10?6m3)
Since mass displaced = 0.05kg
Upthrust = mg = 0.05 $$\times$$ 10
= 0.5N
Question 17
A
10 cm
B
16 cm
C
5 cm
D
14 cm
##### Explanation
Magnification = $$\frac{\text{Length of camera}}{\text{distance of object from pin hole}}$$
m = $$\frac{12}{60}$$
= 0.2
Also magnification m = $$\frac{\text{Height of image}}{\text{Height of object}}$$
Height of image = m $$\times$$ height of object
= 0.2 $$\times$$ 70
= 14cm
Question 18
A
240v
B
340v
C
480v
D
57600v
##### Explanation
The relationship between the peak value and r.m.s value of electricity supply is given as
V$$_{r.m.s}$$ = $$\frac{V_o}{\sqrt{2}}$$
where Vo = peak voltage
Vo = V$$_{r.m.s}$$ x $$\sqrt{2}$$
= 240 x $$\sqrt{2}$$
= 339.5v
= 340v
Question 19
A
1
B
9
C
3
D
0.33
##### Explanation
Resistor connected in a parallel have an equivalence given by
$$\frac{1}{\text{Reff}}$$ = $$\frac{1}{3}$$ + $$\frac{1}{3}$$ + $$\frac{1}{3}$$
= $$\frac{1 + 1 + 1}{3}$$ = $$\frac{3}{3}$$ = 1
Reff = 1
Question 20
A
number of turns in the coil is increased
B
strength of the field magnet is increased
C
slip rings are replaced with split ring cummutator
D
coil is wound on a soft iron armature
##### Explanation
A d.c generator is one in which its current is allowed to flow in one direction even through it may vary in value an a.c generator can only be made to produce a d.c by replacing the two slip rings with a single split ring or commutator.
Try this quiz in in E-test/CBT Mode
switch to
Question Map
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5242515802383423, "perplexity": 3512.5595530549194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00023.warc.gz"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.